JoshTa
@3eternus
Yes so 1.0 [green] cooperation yes
Oddly the ylabel for the probability of cooperation is not appearing on the plot
JoshTa
@3eternus
My players' cooperation rating/ratio plots over x number of turns just display something very different... 0.4-0.6 etc
JoshTa
@3eternus
I look forward to releasing this code, for others to try and see what they can create with it
Vince Knight
@drvinceknight
Hi all, @Nikoleta-v3 has put the following manuscript on the arXiv: https://arxiv.org/abs/1911.12112

Here's the abstract if anyone is interested:

Memory-one strategies are a set of Iterated Prisoner's Dilemma strategies that have been acclaimed for their mathematical tractability and performance against single opponents. This manuscript investigates best responses to a collection of memory-one strategies as a multidimensional optimisation problem. Though extortionate memory-one strategies have gained much attention, we demonstrate that best response memory-one strategies do not behave in an extortionate way, and moreover, for memory-one strategies to be evolutionarily robust they need to be able to behave in a forgiving way. We also provide evidence that memory-one strategies suffer from their limited memory in multi-agent interactions and can be outperformed by longer-memory strategies.
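For readers unfamiliar with the term: a memory-one strategy is just a vector of four cooperation probabilities, one per outcome of the previous round. A minimal sketch (illustrative only, not code from the paper or the library) of two such strategies playing each other:

```python
# A memory-one strategy is a vector (p_CC, p_CD, p_DC, p_DD): the probability
# of cooperating after each possible previous round, where the first letter
# is the player's own last move and the second is the opponent's.
import random

C, D = "C", "D"
STATES = {(C, C): 0, (C, D): 1, (D, C): 2, (D, D): 3}

def play_match(p, q, turns=1000, seed=0):
    """Simulate a match between memory-one strategies p and q.
    Returns each player's cooperation rate."""
    rng = random.Random(seed)
    move1, move2 = C, C  # conventionally open with mutual cooperation
    coop1 = coop2 = 0
    for _ in range(turns):
        coop1 += move1 == C
        coop2 += move2 == C
        s1 = STATES[(move1, move2)]  # the state from player 1's viewpoint
        s2 = STATES[(move2, move1)]  # the same round from player 2's viewpoint
        move1 = C if rng.random() < p[s1] else D
        move2 = C if rng.random() < q[s2] else D
    return coop1 / turns, coop2 / turns

tit_for_tat = (1, 0, 1, 0)  # cooperate iff the opponent cooperated last round
defector = (0, 0, 0, 0)     # always defect

print(play_match(tit_for_tat, defector))  # TFT stops cooperating almost immediately
```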

Marc
@marcharper
Nice :)
JoshTa
@3eternus
https://axelrod.readthedocs.io/en/stable/tutorials/further_topics/ecological_variant.html Can anyone explain these graphs further? I believe they show the stability of each strategy, but I cannot understand the color coding?
JoshTa
@3eternus
Just a note to all: here is my code for the NEAT algorithm. I believe I would be able to submit some new pre-trained networks, but it will require some work; I'm open to collaboration too. https://github.com/3eternus/NEAT-Axelrod/blob/master/main.py
If you're interested, I would be happy to add you as collaborators
Vince Knight
@drvinceknight
The color coding of those graphs is just to help differentiate the strategies. At each time point (a vertical slice) there is a different proportion of the strategies in the population. In your particular example, the strategies listed in the top half die off.
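The dynamic behind those stackplots can be sketched in a few lines (this is an illustrative replicator-style update assuming a strategy's share grows in proportion to its average payoff against the current population, not the library's own implementation; the tutorial linked above produces the plots via the library's `Ecosystem` and `Plot` classes):

```python
# Each generation, reweight strategy proportions by average payoff against
# the current population; strategies with below-average payoff shrink.
def reproduce(payoff, proportions, generations=100):
    """payoff[i][j] = average score of strategy i against strategy j."""
    n = len(proportions)
    history = [proportions[:]]
    for _ in range(generations):
        fitness = [sum(payoff[i][j] * proportions[j] for j in range(n))
                   for i in range(n)]
        mean_fitness = sum(f * p for f, p in zip(fitness, proportions))
        proportions = [p * f / mean_fitness
                       for p, f in zip(proportions, fitness)]
        history.append(proportions[:])
    return history

# Tiny hypothetical example: strategy 0 scores better against everyone,
# so strategy 1's share "dies off", exactly like the top half of the plot.
payoff = [[3.0, 4.0],
          [1.0, 2.0]]
history = reproduce(payoff, [0.5, 0.5])
print(history[-1])  # the weaker strategy's share shrinks toward zero
```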
Vince Knight
@drvinceknight
v4.8.0 has now been released with updated implementations of the first tournament strategies: https://github.com/Axelrod-Python/Axelrod/releases/tag/v4.8.0
Vince Knight
@drvinceknight
One of my students referenced this talk in their coursework: https://www.youtube.com/watch?v=aMy3dYt5itE
A talk at PyCon Israel 2018 using the Axelrod library. (Haven't watched it yet.)
Owen Campbell
@meatballs
somebody on the python-uk irc channel was just asking if anyone else was seeing travis problems. I guess my answer is now "yes"
Vince Knight
@drvinceknight
Ah right :+1:
I was thinking something was wrong on our end and was going to need a fix... :laughing:
Owen Campbell
@meatballs
I now can't even see the axelrod project if I log into Travis. But that link you gave works just fine. Very odd
Oops. Sorry @marcharper. I didn't mean to click the button to re-request a review from you!
Vince Knight
@drvinceknight

I now can't even see the axelrod project if I log into Travis. But that link you gave works just fine. Very odd

Weird...

We'll move to GitHub actions and it'll be nice and self contained... (famous last words...)
There's a github repository and a notebook here: https://github.com/gsurma/prison_escape/blob/master/PrisonEscape.ipynb
Vince Knight
@drvinceknight
Cool.
Owen Campbell
@meatballs
b 58
T.J. Gaffney
@gaffney2010
I accidentally merged Axelrod-Python/Axelrod#1301 while continuous-integration/appveyor/pr was still running. (I thought it would block.) How can I run this now to make sure that it passes?
Marc
@marcharper
I restarted the build in appveyor
T.J. Gaffney
@gaffney2010
Just for curiosity, how do you do that?
Marc
@marcharper
You can log into Appveyor with your github account
calumrowan
@calumrowan
Hi everyone, I'm interested in running an Axelrod-style tournament for the Iterated Optional Prisoner's Dilemma. I understand that for this I would need to create a new, 5-payoff game (R,P,S,T,L); can anybody point me in the right direction for starting this? I have read through the documentation, but I don't believe it covers creating a new game.
calumrowan
@calumrowan
Would it be a case of writing a new case (and changing the ColOutcomeType function) to include this L payoff, or would the alteration be more complex than that?
Vince Knight
@drvinceknight
I'm afraid that, as is, the strategies would not be able to play the Optional PD. The Optional PD has 3 actions for players (Cooperate, Defect or Abstain), so not only would you need to modify the game, you'd also need to modify all the strategies, as they are all programmed to play Cooperate or Defect given a history of play.
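For concreteness, the game modification alone is small. A hedged sketch of a three-action payoff function (the function name, the `A` action, and the numeric values of R, S, T, P and the loner payoff L are all illustrative, not part of the Axelrod library; as noted above, every strategy would still need extending to handle the third action):

```python
C, D, A = "C", "D", "A"  # Cooperate, Defect, Abstain

# Illustrative payoffs; L is the "loner" payoff when either player abstains.
R, S, T, P, L = 3, 0, 5, 1, 2

def optional_pd_score(move1, move2):
    """Return the (player1, player2) payoffs for one round of the optional PD."""
    if move1 == A or move2 == A:
        return (L, L)  # if either player abstains, both get the loner payoff
    table = {(C, C): (R, R), (C, D): (S, T),
             (D, C): (T, S), (D, D): (P, P)}
    return table[(move1, move2)]

print(optional_pd_score(C, A))  # -> (2, 2)
```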
Vince Knight
@drvinceknight
FYI, I'll be putting a new release together tomorrow :+1:
calumrowan
@calumrowan
Ah, that makes sense. Thanks Vince!
Marc
@marcharper
@calumrowan we are looking into supporting more games in a future release, so feel free to open an issue with some relevant details
Vince Knight
@drvinceknight
Axelrod-Python/Axelrod#1309 (the move to GitHub actions) is ready for review now :) :+1:
Marc
@marcharper
Hey, we just passed the 5 year birthday of Axelrod: https://github.com/Axelrod-Python/Axelrod/graphs/contributors
Vince Knight
@drvinceknight
Wow!
Happy birthday :cake:
Vince Knight
@drvinceknight
They use your Moran process plots in the paper @marcharper :)
(I'll open a PR adding it to the citations.)
Vince Knight
@drvinceknight
Version 4.9.0 has been released: https://pypi.org/project/Axelrod/
Vince Knight
@drvinceknight
4.9.1 has been released (it's a bug fix of 4.9.0).
Nikoleta Glynatsi
@Nikoleta-v3
The tournament results with the latest version 4.9.1 are now available: https://github.com/Axelrod-Python/tournament
Vince Knight
@drvinceknight
Thanks @Nikoleta-v3 :+1:
Marc
@marcharper
Interesting that OmegaTFT does so well now in the standard and noisy tournaments
Vince Knight
@drvinceknight
Yeah...
A neat project would be to visualise how strategies have performed through the versions of the library... I'll get my git repo scraping script back out and put that together at some point...