    Erlend Langseth
    @Ploppz
    it's not the final solution, maybe it can be done better
    Erlend Langseth
    @Ploppz
    Using the index as innovation id misses the benefits of an innovation id - I don't think it's sufficient to serve as a historical marking, because if two very different organisms that have the same number of neurons both add a neuron by mutation, then that neuron in both organisms will be considered to have the same historical marking.
    Using the innovation id, I was also able to add functionality to delete neurons by mutation. I think it's good to have.
    I should review this to make sure we add new information
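    For illustration, here is a minimal sketch (hypothetical types, not the actual rustneat API) of a shared innovation counter handing every new neuron gene a unique historical marking, so two unrelated organisms that each add a neuron get different ids, and deleting a neuron by id stays unambiguous:

    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    struct Innovation(u64);

    #[derive(Debug, Clone)]
    struct NeuronGene {
        innovation: Innovation, // unique historical marking, never reused
        bias: f64,
    }

    #[derive(Debug, Default)]
    struct InnovationCounter {
        next_id: u64,
    }

    impl InnovationCounter {
        // Every structural mutation in the population asks this counter for an id,
        // so two different organisms that each add a neuron get different markings.
        fn next(&mut self) -> Innovation {
            let id = Innovation(self.next_id);
            self.next_id += 1;
            id
        }
    }

    fn add_neuron(genome: &mut Vec<NeuronGene>, counter: &mut InnovationCounter) {
        genome.push(NeuronGene { innovation: counter.next(), bias: 0.0 });
    }

    // Deleting by innovation id is unambiguous, which is what makes a
    // delete-neuron mutation straightforward to support.
    fn delete_neuron(genome: &mut Vec<NeuronGene>, target: Innovation) {
        genome.retain(|n| n.innovation != target);
    }

    fn main() {
        let mut counter = InnovationCounter::default();
        let (mut a, mut b) = (Vec::new(), Vec::new());
        add_neuron(&mut a, &mut counter); // gets Innovation(0)
        add_neuron(&mut b, &mut counter); // gets Innovation(1), not 0
        delete_neuron(&mut a, Innovation(0));
        println!("{:?} {:?}", a, b);
    }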
    Erlend Langseth
    @Ploppz
    Thanks for the link. Just to be sure I'm clear: I did not change the innovation number system of connections, only neurons.
    Hugo Freire Gil
    @TLmaK0
    but in the paper there isn't anything about the innovation id for neurons, right? How was it achieved in the original implementation?
    Hugo Freire Gil
    @TLmaK0
    I think I understand what happens. In my implementation neurons don't exist, only connections and genes; you have added a Neuron, so you need to identify them.
    I think we should go back to the original approach
    From my point of view, neurons don't exist in the genes, only connections; the neurons only make sense when we activate an organism
    Erlend Langseth
    @Ploppz
    The real reason I added neuron genes was to implement bias according to the paper "On the Dynamics of Small Continuous-Time Recurrent Neural Networks". Each neuron should have a bias. It is also done like this (neuron and connection genes) in neat-python
    it actually simplifies code somewhat
    imo
    (I just had to try different changes because I didn't know what was wrong, so I looked at for example how neat-python does things)
    hm, you are right: the original paper, while it does talk about neuron genes, doesn't use innovation numbers for them. But we should maybe also draw some inspiration from what state-of-the-art implementations do. (not sure how good neat-python really is, but I think it can solve XOR?)
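    To make the bias point concrete, here is a rough sketch (hypothetical names, not the actual rustneat or neat-python structures) of a neuron gene carrying its own bias, plugged into one Euler step of the standard CTRNN update tau_i * dy_i/dt = -y_i + sum over j of w(j->i) * sigma(y_j + theta_j), where theta_j is exactly the per-neuron bias being discussed:

    struct NeuronGene {
        bias: f64,          // theta_i, stored in the gene itself
        time_constant: f64, // tau_i
    }

    fn sigmoid(x: f64) -> f64 {
        1.0 / (1.0 + (-x).exp())
    }

    /// One Euler integration step; weights[i][j] is the connection from neuron j to neuron i.
    fn ctrnn_step(state: &mut [f64], neurons: &[NeuronGene], weights: &[Vec<f64>], dt: f64) {
        // Firing rates use the pre-update state, with each neuron's own bias applied.
        let firing: Vec<f64> = state
            .iter()
            .zip(neurons)
            .map(|(y, n)| sigmoid(*y + n.bias))
            .collect();
        for (i, y) in state.iter_mut().enumerate() {
            let input: f64 = weights[i].iter().zip(&firing).map(|(w, f)| w * f).sum();
            *y += dt * (-*y + input) / neurons[i].time_constant;
        }
    }

    fn main() {
        let neurons = vec![
            NeuronGene { bias: 0.5, time_constant: 1.0 },
            NeuronGene { bias: -1.0, time_constant: 1.0 },
        ];
        let weights = vec![vec![0.0, 2.0], vec![-2.0, 0.0]];
        let mut state = vec![0.0, 0.0];
        for _ in 0..100 {
            ctrnn_step(&mut state, &neurons, &weights, 0.05);
        }
        println!("{:?}", state);
    }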
    Erlend Langseth
    @Ploppz
    but I'm not really sure what practical difference it makes to have neuron genes vs not (apart from how we implement bias). I mean at first I thought the former would be better, but now I'm not sure; it seems like maybe it could be insignificant
    Hugo Freire Gil
    @TLmaK0
    In the last try I did, I already have a bias connection created by a toggle_bias mutation. If you run "cargo run --release --example function_approximation --features=telemetry" you will see how the neural network tries to approximate the x^2 function, but it stalls every time when it's close to solving it. I think the problem is here: the network doesn't evolve when the fitness is very close. I think it is moving back and forth near the solution.
    Erlend Langseth
    @Ploppz
    hm, interesting, because I seem to have the same problem with my own environment - it plateaus when it starts to get quite good
    Yeah I know you had bias connections. I changed it to have bias in neurons, because the CTRNN paper uses that, and also the python library. The added benefit of having neuron genes is that you can have mutations to delete neurons. The parameters that neat-python uses for XOR are actually add_neuron_pr = del_neuron_pr = 0.2, add_conn_pr = del_conn_pr = 0.5. I imagine this could facilitate faster evolution because it mutates more, while not necessarily drifting toward higher complexity. I mean, compare with only having the probability of adding neurons and connections: I noticed that we have to set a very low probability like 0.002, because otherwise it will very quickly grow in complexity.
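    Spelled out as a parameter block (field names are illustrative, not the actual rustneat configuration; the numbers are the neat-python XOR settings quoted above), the idea is that pairing every add mutation with a delete mutation lets the rates stay high without the genomes drifting toward ever-growing complexity:

    // Illustrative only: hypothetical struct, not the actual rustneat configuration.
    struct MutationParams {
        add_neuron_pr: f64,
        del_neuron_pr: f64,
        add_conn_pr: f64,
        del_conn_pr: f64,
    }

    impl Default for MutationParams {
        fn default() -> Self {
            MutationParams {
                add_neuron_pr: 0.2,
                del_neuron_pr: 0.2,
                add_conn_pr: 0.5,
                del_conn_pr: 0.5,
            }
        }
    }

    fn main() {
        let p = MutationParams::default();
        // Expected structural growth per organism per generation is roughly
        // add_pr - del_pr for each gene type, i.e. close to zero here.
        println!("net neuron growth ~ {}", p.add_neuron_pr - p.del_neuron_pr);
        println!("net connection growth ~ {}", p.add_conn_pr - p.del_conn_pr);
    }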
    playX
    @playXE
    can somebody look at that issue: TLmaK0/rustneat#40 ?
    Erlend Langseth
    @Ploppz
    You might wanna debug-output your network. Maybe it doesn't have any (meaningful) connections.
    playX
    @playXE
    how can I debug that?
    I tried to use the telemetry feature, but when I open the browser I just see three green squares and that's all
    playX
    @playXE
    also, can I run evaluate_in without running it in parallel?
    playX
    @playXE
    @Ploppz I tried to use your branch but get this:
    error[E0428]: the name `_IMPL_SERIALIZE_FOR_ConnectionGene` is defined multiple times
      --> /Users/adelprokurov/.cargo/git/checkouts/rustneat-edbea00d81431b08/a80e622/src/nn/gene.rs:55:42
       |
    54 | #[derive(Debug, Copy, Clone, Serialize, Deserialize)]
       |                              --------- previous definition of the value `_IMPL_SERIALIZE_FOR_ConnectionGene` here
    55 | #[cfg_attr(feature = "telemetry", derive(Serialize))]
       |                                          ^^^^^^^^^ `_IMPL_SERIALIZE_FOR_ConnectionGene` redefined here
       |
       = note: `_IMPL_SERIALIZE_FOR_ConnectionGene` must be defined only once in the value namespace of this module
    
    error[E0119]: conflicting implementations of trait `nn::_IMPL_DESERIALIZE_FOR_NeuralNetwork::_serde::Serialize` for type `nn::gene::ConnectionGene`:
      --> /Users/adelprokurov/.cargo/git/checkouts/rustneat-edbea00d81431b08/a80e622/src/nn/gene.rs:54:30
       |
    54 | #[derive(Debug, Copy, Clone, Serialize, Deserialize)]
       |                              ^^^^^^^^^ conflicting implementation for `nn::gene::ConnectionGene`
    55 | #[cfg_attr(feature = "telemetry", derive(Serialize))]
       |                                          --------- first implementation here
    
    error: aborting due to 2 previous errors
    Erlend Langseth
    @Ploppz
    @playXE Which branch? You mean my fork? master here should work fine
    playX
    @playXE
    I mean your fork
    Erlend Langseth
    @Ploppz
    As for debugging, you can print your network out in debug format and inspect it (println!("{:?}", my_network);), unless your network is too complex to reasonably understand, of course
    but it should be easy to identify a network with no connections between input and output
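    Something along these lines, for example (the field and type names are hypothetical stand-ins, not the actual rustneat genome), just to dump the genome and check whether any enabled connection exists at all:

    // Illustrative only: stand-in types for whatever your genome actually looks like.
    #[derive(Debug)]
    struct ConnectionGene {
        from: usize,
        to: usize,
        weight: f64,
        enabled: bool,
    }

    #[derive(Debug)]
    struct Genome {
        connections: Vec<ConnectionGene>,
    }

    fn main() {
        let my_network = Genome { connections: vec![] };

        // Pretty-printed Debug output is usually enough to spot an empty network.
        println!("{:#?}", my_network);

        if !my_network.connections.iter().any(|c| c.enabled) {
            println!("warning: no enabled connections, outputs cannot depend on inputs");
        }
    }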
    Strange that you get those errors. On master of my fork, cargo build yields no errors. Maybe I, or you, have to rustup update. What's your rustc version?
    playX
    @playXE
    I get these errors with telemetry feature enabled
    Erlend Langseth
    @Ploppz
    Ah. Yes, damn it, I forgot to update telemetry stuff in my fork >_>
    playX
    @playXE
    because of this:
    #[derive(Debug, Copy, Clone, Serialize, Deserialize)]
    #[cfg_attr(feature = "telemetry", derive(Serialize))]
    Also, is it possible to run evaluate_in without parallel iteration?
    Erlend Langseth
    @Ploppz
    oh .. you could just remove that cfg_attr line
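    In other words, something like this (fields elided and replaced with a placeholder; assumes serde with the derive feature as a dependency, though the real crate may import the macros differently), so that Serialize is derived exactly once:

    use serde::{Deserialize, Serialize};

    // Sketch of the suggested fix: the conditional
    // #[cfg_attr(feature = "telemetry", derive(Serialize))] line is dropped,
    // leaving a single unconditional derive of Serialize.
    #[derive(Debug, Copy, Clone, Serialize, Deserialize)]
    pub struct ConnectionGene {
        // placeholder field; the real fields live in src/nn/gene.rs
        pub weight: f64,
    }

    fn main() {
        let gene = ConnectionGene { weight: 0.5 };
        println!("{:?}", gene);
    }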
    In the upstream repo that is? This would have to change I believe https://github.com/TLmaK0/rustneat/blob/master/src/species_evaluator.rs#L20
    In my version, this would have to change https://github.com/Ploppz/rustneat/blob/master/src/population.rs#L138 - change all par_iter_mut() to iter_mut() that's all.
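    Roughly this kind of change, in simplified form (the real code in population.rs is more involved; this assumes the rayon crate, which is where par_iter_mut comes from):

    use rayon::prelude::*;

    struct Organism {
        fitness: f64,
    }

    fn evaluate(organism: &mut Organism) {
        // stand-in for the real fitness evaluation
        organism.fitness = 1.0;
    }

    fn evaluate_parallel(organisms: &mut Vec<Organism>) {
        organisms.par_iter_mut().for_each(|o| evaluate(o));
    }

    fn evaluate_sequential(organisms: &mut Vec<Organism>) {
        // the only change: par_iter_mut() -> iter_mut()
        organisms.iter_mut().for_each(|o| evaluate(o));
    }

    fn main() {
        let mut population: Vec<Organism> = (0..4).map(|_| Organism { fitness: 0.0 }).collect();
        evaluate_parallel(&mut population);
        evaluate_sequential(&mut population);
        println!("{}", population[0].fitness);
    }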
    playX
    @playXE
    Ok, thanks
    I will fork it for my purposes then :)
    Erlend Langseth
    @Ploppz
    I fixed the compilation error you got. But I haven't yet tested how telemetry works in my branch. (I neglected it during my development, sadly)
    Erlend Langseth
    @Ploppz
    @TLmaK0 In case you will merge my fork: I just made an attempt to get telemetry working in my fork. Here: https://github.com/Ploppz/rustneat/blob/telemetry_attempt/src/population.rs#L98 I think it should do approximately the same as in upstream. But somehow nothing is shown in the telemetry, just 3 blank boxes. I would expect at least the fitness curve to work. What am I missing? As for the visualization of neural networks, I didn't expect it to work because the genome is so different from upstream. (I would have to ask you to tell me the requirements of the format)
    Hugo Freire Gil
    @TLmaK0
    putting this telemetry!("fitness1", 1.0, format!("{}", self.champion_fitness)); should be enough to get fitness
    Erlend Langseth
    @Ploppz
    I do that on line 107
    playX
    @playXE
    @Ploppz could you take a look at this pr: Ploppz/rustneat#1 ?
    It should allow optionally evaluating fitness in parallel
    Erlend Langseth
    @Ploppz
    Some discussion about changes to my fork is taking place in that PR, @TLmaK0. It may be of interest
    Erlend Langseth
    @Ploppz

    @TLmaK0 Are you available?
    As you can see, I finally got around to breaking it down into smaller PRs.
    However, I have a suggestion for an alternative approach, equivalent to multiple PRs if I understand your goal correctly.
    I understood that the reason you want more, smaller PRs is that you can then more easily go through the changes one by one.
    I just made two PRs that follow each other, but I realize that github won't let you see the difference between these PRs. Actually, right now I'm working on making a single commit for each PR, so you can see the changes in each PR by looking at its commit.

    But that leads me to the question: since all these PRs will follow on each other, why not just make one big PR, but with one commit for each functionality?

    Erlend Langseth
    @Ploppz
    Done. TLmaK0/rustneat#43 I hope I made it easier to understand the changes now. I think the challenge is just the sheer amount of changes. And meanwhile I really believe that only the end result of my work is really worth reviewing - all commits in between are just part of the journey toward a better algorithm.
    Erlend Langseth
    @Ploppz
    oh, and let me know if you need any help with rebasing your latest features/function_approximation on top of my branch. But I see you have, for example, been trying to fix ctrnn, which I believe I have done in the PR.
    @playXE https://github.com/TLmaK0/rustneat/pull/43/commits/6c9b615baa105b7df283c503d7c45d54c5bbe271 github doesn't properly register your username and email. I used the 'Author' from your original commits: playXE <adel.prokurov@protonmail.com>