JoshTa
@3eternus
Yes, that would work.
Vince Knight
@drvinceknight
And if you wanted to, you could do the same with Ashlock's fingerprints; they were originally designed for the purpose you describe (I believe).
Note: the transitive fingerprint is very simple to interpret and analyse; this is not necessarily immediately the case for the Ashlock fingerprint.
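For example, a minimal sketch along the lines of the library's fingerprinting tutorial (the strategy, probe and parameters here are just illustrative, so check the docs for the exact signatures):
import axelrod as axl

axl.seed(0)
strategy = axl.WinStayLoseShift
probe = axl.TitForTat
af = axl.AshlockFingerprint(strategy, probe)
data = af.fingerprint(turns=10, repetitions=2, step=0.2)  # coarse grid so it runs quickly
p = af.plot()
p.savefig("ashlock_fingerprint.png")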
If we can assist further let us know :)
JoshTa
@3eternus
Just having some issues: the plot seems to be quite buggy? I noted fingerprint.py already had a def for the plots (plt.tight_layout())
[image attached: image.png]
Vince Knight
@drvinceknight
Can you paste the code you're using to obtain that here and we can take a look. It might be that you need to resize the figure using standard matplotlib.
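For example, something along these lines (a rough sketch; fig stands in for whatever figure object your plotting call returns):
import matplotlib.pyplot as plt

fig = plt.gcf()  # or the figure returned by the plotting call
fig.set_size_inches(8, 12)  # width, height in inches
fig.tight_layout()
fig.savefig("resized.png")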
John Paul Mintz
@jpmintz01
@3eternus:
# import packages 
import axelrod as axl
import pandas as pd
import numpy as np

C, D = axl.Action.C, axl.Action.D #define C and D as axelrod objects

axl.seed(0)  # Set a seed

# convert the adversaries' match choices into a series of Axelrod-typed game choices (you would insert your own "adversary" choices into each of these)
action_dict = {'AI_choices': [D, D, D, C, C, D, D, D, C, D], 'HAI_choices': [C,D,C,C,D,C,C,C,C,D], 'Human_choices': [C,C,D,C,D,D,D,C,C,D]}

# run tournaments for each strategy in the axelrod library against each adversary in the action_dict
for key, elem in action_dict.items():
    mock = axl.MockPlayer(actions=elem)  # create a mock player (competitor)
    players = [s() for s in axl.strategies]  # create the other players
    players.insert(0, mock)  # insert the mock player at index 0
    edge_list = [(0, i) for i in range(1, len(players))]  # every player faces the mock player
    tournament = axl.Tournament(players, edges=edge_list, turns=len(elem), repetitions=1)  # define the tournament
    results = tournament.play(filename=key + '.csv')  # play the tournament and save the interactions
Note that this only outputs a list of strategies and associated match choices based on the adversaries' choices - you'll need to do some analysis to match your target player's actual choices to these. I used R, and am still working out my method.
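If you stay in Python instead, a minimal sketch for inspecting one of those output files (the column layout depends on the library version, so check the header rather than relying on specific column names):
import pandas as pd

df = pd.read_csv('AI_choices.csv')  # one of the files written by tournament.play(filename=...)
print(df.columns.tolist())  # see which per-interaction columns are available
print(df.head())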
Vince Knight
@drvinceknight
FWIW @jpmintz01 that's essentially what the transitive fingerprint does, the TransitiveFingerprint.fingerprint() method takes an optional filename argument if you did want to save the entire interaction data (but what it saves and makes available is essentially that anyway).
Here's the source code where you can see it essentially doing what you've done above: https://github.com/Axelrod-Python/Axelrod/blob/master/axelrod/fingerprint.py#L425
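For example, a minimal sketch (player and opponents as in the snippets above):
tf = axl.TransitiveFingerprint(player, opponents=opponents)
data = tf.fingerprint(turns=20, repetitions=10, filename="interactions.csv")  # also writes the full interaction data to disk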
JoshTa
@3eternus
@drvinceknight Hi, I used the code you provided, seemingly:
opponents = [s() for s in axl.demo_strategies]
for i, player in enumerate(neat_players):
    tf = axl.TransitiveFingerprint(player, opponents=opponents)
    data = tf.fingerprint(turns=20, repetitions=10)
    p = tf.plot(display_names=True)
    p.savefig(f"plot{i}.png")  # Saves a plot for each player of interest
Vince Knight
@drvinceknight
Give me two seconds and I'll throw some code here that lets you reshape the figure.
Vince Knight
@drvinceknight
Sorry, what I was thinking of doing doesn't work (my mistake), but all tf.plot is doing is a matplotlib imshow on tf.data (which is also the output of tf.fingerprint). It does its best to choose sensible axes sizes, but depending on what you have in opponents it might struggle, so it's probably worth just plotting it yourself so to speak.
Someone with better matplotlib skills than me might know how to reshape the output of tf.plot, but I don't have that at the tip of my mind and in your position I'd probably just plot tf.data (it's just a 2D array of the cooperation rates).
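Something like this minimal sketch would do (assuming tf has already been fingerprinted as above; the figure size and axis labels are just illustrative):
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 10))  # pick a size that suits your list of opponents
im = ax.imshow(tf.data)  # tf.data is the 2D array of cooperation rates
ax.set_xlabel("Turn")
ax.set_ylabel("Opponent")
fig.colorbar(im, ax=ax)
fig.savefig("fingerprint.png")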
Another (very lazy) solution would be to tweak your opponents so you get plots that look OK (I don't suggest this is a good solution!). For example:
import axelrod as axl
import matplotlib.pyplot as plt
import random
axl.seed(1)
player = axl.TitFor2Tats()
opponents = [s() for s in random.sample(axl.strategies, 25)]
tf = axl.TransitiveFingerprint(player, opponents=opponents)
data = tf.fingerprint(turns=50, repetitions=10)

tf.plot(display_names=True)
gives:
[image attached: fp.png]
(That's just a seeded random selection of 25 opponents)
Hope that helps. :+1:
JoshTa
@3eternus
Indeed, just as mentioned before, I have a collection of [players] I want to fingerprint.
John Paul Mintz
@jpmintz01
@drvinceknight , thanks for this!
Vince Knight
@drvinceknight
:+1:
JoshTa
@3eternus
I fixed the plots by removing line 534 of fingerprint.py: # fig.set_size_inches(width, height)
[image attached: image.png]
JoshTa
@3eternus
[image attached: image.png]
JoshTa
@3eternus
What are mind reader/controller strategies? I can't find any literature about them.
Marc
@marcharper
Those are "cheating strategies" that can manipulate their opponent's code or simulate their opponent directly
they were created early on in the axelrod library's development
JoshTa
@3eternus
Ah well, I think it's possible because I'm using an EA algorithm: the player is part of the population, and I assume that because of traits/speciation it has fingerprinted this way. There is no way the player can affect the code.
Vince Knight
@drvinceknight
This page gives you a list of all the strategies in the library with access to their source code: https://axelrod.readthedocs.io/en/stable/reference/all_strategies.html#axelrod.strategies.mindcontrol.MindBender
The cheating strategies usually "overwrite" the strategy method of the strategy they're playing against. I would not use them for your fingerprints.
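If it helps, a rough way to filter them out before building your opponents list (the classifier keys here are how I recall the library flagging cheaters, so double check against your version):
import axelrod as axl

def is_cheater(strategy_class):
    # Cheaters inspect or manipulate their opponent's source or state
    classifier = strategy_class.classifier
    return any(classifier.get(key, False)
               for key in ("inspects_source", "manipulates_source", "manipulates_state"))

opponents = [s() for s in axl.all_strategies if not is_cheater(s)]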
JoshTa
@3eternus
Oddly, I don't think the fingerprint code is working correctly; for example, fingerprinting against just the basic strategies I obtain nothing on the heatmap. I assume 1.0 means a high probability of cooperation?
Vince Knight
@drvinceknight
The fingerprinting strategy code has been pretty extensively used and tested so I'm confident it's working, perhaps it's just that against those strategies your strategies cooperate?
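A quick way to sanity check that is to play a single match and look at the raw actions (a sketch; my_player stands in for one of your players):
import axelrod as axl

my_player = axl.Cooperator()  # substitute one of your NEAT players here
match = axl.Match((my_player, axl.TitForTat()), turns=20)
print(match.play())  # the raw sequence of (player_action, opponent_action) pairs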
JoshTa
@3eternus
Yes, so 1.0 [green] means cooperation, yes.
Oddly, the y-label for the probability of cooperation is not appearing on the plot.
JoshTa
@3eternus
My player's cooperation rating/ratio plots over x number of turns just display something very different... 0.4-0.6 etc.
JoshTa
@3eternus
I look forward to releasing this code, for others to try and see what they can create with it
Vince Knight
@drvinceknight
Hi all, @Nikoleta-v3 has put the following manuscript on the arXiv: https://arxiv.org/abs/1911.12112

Here's the abstract if anyone is interested:

Memory-one strategies are a set of Iterated Prisoner's Dilemma strategies that have been acclaimed for their mathematical tractability and performance against single opponents. This manuscript investigates best responses to a collection of memory-one strategies as a multidimensional optimisation problem. Though extortionate memory-one strategies have gained much attention, we demonstrate that best response memory-one strategies do not behave in an extortionate way, and moreover, for memory-one strategies to be evolutionary robust they need to be able to behave in a forgiving way. We also provide evidence that memory-one strategies suffer from their limited memory in multi-agent interactions and can be outperformed by longer memory strategies.

Marc
@marcharper
Nice :)
JoshTa
@3eternus
https://axelrod.readthedocs.io/en/stable/tutorials/further_topics/ecological_variant.html Can anyone explain these graphs further? I believe I understand them to show the stability of each strategy, but I cannot understand the color coding?
[image attached: image.png]
JoshTa
@3eternus
Just to note to all, here is my code for the NEAT algorithm. I believe I would be able to submit some new pre-trained networks, but it will require some work; I'm open to collaboration too: https://github.com/3eternus/NEAT-Axelrod/blob/master/main.py
If you're interested, I would be happy to add you as collaborators.
Vince Knight
@drvinceknight
The color coding of those graphs is just to help differentiate them. At each time point there is a different proportion of the strategies in the population (shown vertically). In your particular example, the strategies listed in the top half die off.
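For reference, the plots on that page come from something like this (a sketch of how I recall the documented workflow; check the linked tutorial for the exact calls):
import axelrod as axl

players = [s() for s in axl.demo_strategies]
tournament = axl.Tournament(players, turns=100, repetitions=10)
results = tournament.play()

eco = axl.Ecosystem(results)  # ecological variant: strategies reproduce in proportion to payoff
eco.reproduce(100)  # evolve the population over 100 time steps

plot = axl.Plot(results)
p = plot.stackplot(eco)  # the stacked, colour-coded population plot from the tutorial
p.savefig("ecosystem.png")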
Vince Knight
@drvinceknight
v4.8.0 has now been released with updated implementations of the first tournament strategies: https://github.com/Axelrod-Python/Axelrod/releases/tag/v4.8.0
Vince Knight
@drvinceknight