    Giuseppe Chindemi
    @pineider
    Hey Werner
    how are you?
    I was having a look at the algorithm eaAlphaMuPlusLambda in the repo, but it seems that something is missing
    What is the status of the repo?
    Werner Van Geit
    @wvangeit
    I still have to test everything
    but what is wrong with eaAlphaMuPlusLambda ?
    Giuseppe Chindemi
    @pineider
    well there is no alpha :D
    I see the alpha in the selector
    but I don't understand why it is there
    Werner Van Geit
    @wvangeit
    Well, yes, indeed, it is confusing
    I still need to find a solution for this
    The problem is that in our case the selector and the algorithm work in unison
    I should probably add an alpha parameter to the algorithm and pass it to the selector
    but that breaks in case the selector doesn't have this parameter
    (which I can probably check)
    In the IBEA case the selector doesn't just select the parents; it also changes the population
    In any case, as you can see, at the moment we have
    if alpha is None:
        alpha = len(population)
    so we should leave the alpha parameter alone for the moment
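    A minimal sketch of the signature check mentioned above (the function `select_with_optional_alpha` and the toy selectors are invented for illustration, not BluePyOpt code): the algorithm can inspect the selector's signature and only pass `alpha` when the selector accepts it.

    ```python
    import inspect

    def select_with_optional_alpha(selector, population, mu, alpha):
        # Pass alpha to the selector only if its signature accepts it;
        # otherwise fall back to the plain (population, mu) call.
        if 'alpha' in inspect.signature(selector).parameters:
            return selector(population, mu, alpha=alpha)
        return selector(population, mu)

    # Two toy selectors: one that takes alpha, one that doesn't.
    def sel_with_alpha(population, mu, alpha):
        return population[:alpha][:mu]

    def sel_plain(population, mu):
        return population[:mu]

    pop = list(range(10))
    print(select_with_optional_alpha(sel_with_alpha, pop, 3, alpha=5))  # [0, 1, 2]
    print(select_with_optional_alpha(sel_plain, pop, 3, alpha=5))       # [0, 1, 2]
    ```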
    Russell Jarvis
    @russelljjarvis
    Hi Werner and others, I am just wondering how you can execute a custom-defined evaluate method with arguments inside a for loop? In all of the examples I have seen, the evaluate method is made available to the GA with a statement like:
    toolbox.register("evaluate", evaluate)
    Russell Jarvis
    @russelljjarvis
    However, for multiple different reasons I would like to execute a genetic algorithm in a loop, and I would like to tune a different number of model parameters in different iterations of that loop.
    Is there an alternative syntax for calling the evaluate method with arguments?
    I will paste an example below:
    from scoop import futures

    parameters = ['T', 'Nap', 'h']
    list_of_1parameter = [-10, -2]
    target_versus_error = {}
    target_versus_errorls = []

    if __name__ == '__main__':
        for i in parameters:
            toolbox = base.Toolbox()
            toolbox.register("map", futures.map)
            toolbox.register("uniformparams", uniform, LOWER, UPPER, IND_SIZE)
            creator.create("Individual", Individual, fitness=creator.FitnessMin)
            toolbox.register("Individual", tools.initIterate, creator.Individual, toolbox.uniformparams)
            toolbox.register("population", tools.initRepeat, list, toolbox.Individual)
            # toolbox.register("evaluate", toolbox.Individual, target_versus_error, target_versus_errorls, IND_SIZE)
            toolbox.register("evaluate", evaluate)
            toolbox.register("mate", deap.tools.cxSimulatedBinaryBounded, eta=ETA, low=LOWER, up=UPPER)
            # Register the mutation operator
            # Experiment with changing the strategy.
            toolbox.register("mutate", deap.tools.mutPolynomialBounded, eta=ETA, low=LOWER, up=UPPER, indpb=0.1)
            # Register the variate operator
            toolbox.register("variate", deap.algorithms.varAnd)
            # Select the best
            toolbox.register("select", ind_selector, selector=tools.selBest)
            # Generate the population object
            pop = toolbox.population(n=MU)
            hof = tools.HallOfFame(1)
            stats = tools.Statistics(lambda ind: ind.fitness.values)
            of = Objective_functions()
            for gen in range(NGEN):
                offspring = algorithms.varAnd(pop, toolbox, cxpb=CXPB, mutpb=1 - CXPB)
                fits = toolbox.map(toolbox.evaluate, offspring)
                for fit, ind in zip(fits, offspring):
                    ind.fitness.values = fit
                    print ' generation # ', gen
                    print ' fitness, index ', fit, ind, offspring
                pop = toolbox.select(offspring, k=len(pop))
            top10 = tools.selBest(pop, k=10)
    Russell Jarvis
    @russelljjarvis
    Sorry about the formatting above.
    I would like the line toolbox.register("evaluate", evaluate) to be a more flexible call to my custom evaluate method, one which executes with different values depending on the current index of the top-level for loop: for i in parameters:
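    For what it's worth, DEAP's Toolbox.register binds any extra positional or keyword arguments you pass after the function (it uses functools.partial internally), so re-registering "evaluate" on each loop iteration is one way to change its arguments. A minimal sketch using partial directly, with a toy evaluate standing in for the custom one:

    ```python
    from functools import partial

    def evaluate(individual, param_name, targets):
        # Toy fitness, hypothetical stand-in for a custom evaluate method;
        # returns a one-element tuple as DEAP expects.
        return (sum(individual) + len(targets),)

    parameters = ['T', 'Nap', 'h']
    results = []
    for name in parameters:
        # deap's toolbox.register("evaluate", evaluate, param_name=name, targets=[-10, -2])
        # binds the extra arguments in the same way as this partial:
        evaluate_bound = partial(evaluate, param_name=name, targets=[-10, -2])
        results.append(evaluate_bound([1, 2, 3]))

    print(results)  # [(8,), (8,), (8,)]
    ```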
    Russell Jarvis
    @russelljjarvis
    One reason I want to execute a GA in a for loop is that I want to do a small sweep of possible GA parameters: population_size, offspring_size, cross_over_rate, mutation_rate, etc. Another reason I would like to be able to call evaluate with changing arguments is that I would like to execute the GA by launching python with different command-line arguments each time it starts, and I want the number of parameters to be optimised to differ between launches.
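    A sweep like that can be driven by itertools.product over the GA settings; the grid values below are placeholders, and the GA run itself is elided:

    ```python
    import itertools

    # Hypothetical sweep grid; names mirror the GA parameters mentioned above.
    population_sizes = [50, 100]
    cxpbs = [0.6, 0.9]
    mutpbs = [0.05, 0.1]

    runs = []
    for mu, cxpb, mutpb in itertools.product(population_sizes, cxpbs, mutpbs):
        # In a real sweep, build a fresh Toolbox and run the GA here with
        # these settings, then record the best fitness found.
        runs.append({'mu': mu, 'cxpb': cxpb, 'mutpb': mutpb})

    print(len(runs))  # 8 configurations
    ```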
    Russell Jarvis
    @russelljjarvis
    For this reason I have been trying to learn from the BluePyOpt source code. I have noticed that the opt_l5pc file can be launched with command-line arguments, and that it must therefore be able to execute the evaluator method with different arguments in different contexts. However, I don't understand how this works. Using grep I found the file tests/test_evaluators.py and the line evaluator = bluepyopt.evaluators.Evaluator(). I then opened the file evaluators.py. I can see that this file somehow implements an abstract method, and I think its existence is the reason why BluePyOpt is able to call the evaluate method with different parameters as appropriate. However, I have never seen abstract methods before and I don't understand the evaluators.py file and what it does. I would just like to know: am I on the right track? Is there a simpler approach (things seem much too complicated for something which should be much simpler)?
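    The abstract-method pattern is small once seen in isolation. A minimal sketch (the SumEvaluator subclass and its targets are invented for illustration; evaluate_with_lists is modeled on the BluePyOpt Evaluator interface): the base class declares the interface, and each concrete subclass carries its own arguments, which is how the same evaluate call can run with different context.

    ```python
    from abc import ABC, abstractmethod

    class Evaluator(ABC):
        # The base class only declares the interface; the GA code calls
        # evaluate_with_lists() without knowing which subclass it got.
        @abstractmethod
        def evaluate_with_lists(self, param_values):
            """Return a list of objective scores for one parameter set."""

    class SumEvaluator(Evaluator):
        # Hypothetical concrete evaluator: each instance carries its own
        # targets, so the evaluate call runs with different context
        # depending on which evaluator object was constructed.
        def __init__(self, targets):
            self.targets = targets

        def evaluate_with_lists(self, param_values):
            return [abs(sum(param_values) - t) for t in self.targets]

    ev = SumEvaluator(targets=[0, 10])
    print(ev.evaluate_with_lists([3, 4]))  # [7, 3]
    ```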
    Werner Van Geit
    @wvangeit
    hi @russelljjarvis, I am at a FENS workshop at the moment, and the answer is a bit complex. I'll answer as soon as I find some time.
    Russell Jarvis
    @russelljjarvis
    Great. That would be awesome, thanks :) This issue is still a priority for me.