wxie2013
@wxie2013
Thanks Marcel
André Jacques
@Havarem
Hi everybody. Is this the place to ask questions about how to use brian2? Or more about the development of brian2?
VigneswaranC
@Vigneswaran-Chandrasekaran
Hi, I think a better place would be https://brian.discourse.group/ and you can also refer to the Brian documentation, which likely has the answers you're looking for ;)
wxie2013
@wxie2013
After defining a max_delay for a synapse, is it still necessary to explicitly randomize the delay, or will Brian2 do it automatically? For example:
S.max_delay = '10*ms'
S.delay = 'rand() * 10*ms'
sholevs66
@sholevs66
Hi guys :), I'm looking into implementing brian2 functionality on my own, so basically creating my own SNN simulation.
Regarding the functionality of a simple single LIF neuron: should I approximate the differential equation using some kind of numerical method? Will the simple Euler method be good enough?
Marcel Stimberg
@mstimberg
Hi @wxie2013 : not quite sure where max_delay is coming from, is this maybe a Brian 1 thing? In Brian2, heterogeneous delays are the default and you'll have to initialize them explicitly. So you only need your second line to set random delays between 0 and 10ms.
4 replies
@sholevs66 you mean you want to implement a LIF model without using Brian? In that case, yes, you need some kind of method to solve the equation (except for a very simple model, where you might actually be able to solve it analytically; this is what Brian's exact method does). Euler is simple and fast but not very accurate; for a typical LIF model, though, it is usually good enough. It depends a lot on the time step you use: Brian's default time step of 0.1ms should work for typical time constants.
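The forward-Euler approach discussed above can be sketched in plain numpy, without Brian. This is only an illustrative implementation: all parameter values (time constant, threshold, membrane resistance, input current) are assumptions chosen for the example, not anything from the conversation.

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, t_total=0.1,
                 tau=0.01, v_rest=-0.07, v_reset=-0.07,
                 v_threshold=-0.05, r_m=1e8):
    """Forward-Euler simulation of a LIF neuron with constant input current.

    Integrates dv/dt = (v_rest - v + R*I) / tau and returns the voltage
    trace plus the list of spike times. All units are SI (volts, seconds).
    """
    n_steps = int(t_total / dt)
    v = v_rest
    trace = np.empty(n_steps)
    spikes = []
    for step in range(n_steps):
        # Forward-Euler update: v_new = v + dt * dv/dt
        v += dt * (v_rest - v + r_m * i_input) / tau
        if v >= v_threshold:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset               # reset after the spike
        trace[step] = v
    return trace, spikes

# A 0.3 nA drive pushes the steady-state voltage above threshold,
# so the neuron fires regularly; zero input produces no spikes.
trace, spikes = simulate_lif(i_input=3e-10)
```

Shrinking `dt` (or switching to an exponential-Euler or exact update for this linear equation) reduces the discretization error, which is the trade-off Marcel describes.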
wxie2013
@wxie2013
When using multiple threads, it shows: "WARNING OpenMP code is not yet well tested, and may be inaccurate. [brian2.devices.cpp_standalone.device.openmp]". Is there anything we should worry about in this case?
Marcel Stimberg
@mstimberg
No, we show this whenever you use OpenMP, to deny any responsibility if things go wrong ;) More seriously: the OpenMP implementation is fairly well-tested, and while there can of course be bugs (as in the rest of Brian), you can use it for normal simulations without any worries. Why do we raise this warning in the first place? Because there are a few non-standard use cases where it can give wrong results. Linked variables can potentially be problematic if they link several times to the same index, and the same goes for the underlying add_reference function. Here is an example showing this problem. The example is quite contrived and I have not seen any issues with "real" use cases, though. So again, in general I would not worry, but if you use a very complex setup with things like linked variables, it might be worth running a few simulations with and without OpenMP to see whether there is any difference (and let us know if there is!).
Denis Alevi
@denisalevi
Oh well, I guess I should be aware of that too. Brian2CUDA prints [1, 1]... Any other places where this could be relevant?
Marcel Stimberg
@mstimberg
This is the only example I could come up with. But I am actually not 100% sure whether this is something that we should actually support/fix. It's similar to using a reset statement with a scalar variable: e.g. if you write reset='scalar_variable += 1', would you expect it to add 1 per reset or 1 per spike? We raise an error in this case, and I think for the example I linked above, it would probably make more sense to raise an error as well.
Denis Alevi
@denisalevi
Yeah, I see your point. I would have expected the reset to add 1 per spike and the linked variable to add 1 per running neuron (leading to [500, 500]). But if it doesn't, it should probably raise an error rather than just work, even for C++ OpenMP? Are there any cases that would not work easily without this mechanism?
Marcel Stimberg
@mstimberg
So for the linked variable, that is what is happening currently (outside of OpenMP/Brian2CUDA). If we support it then this should be the result, I agree, but not sure that it is worth having this. For reset, we raise an error so you cannot use this formulation. I don't really know use cases for this mechanism, but who knows :) We could also have some compromise, e.g. raise an error but provide a legacy preference to reinstall the old behaviour so that old scripts could still use it.
Denis Alevi
@denisalevi

That would mean raising an error and allowing it with the legacy pref only in runtime and C++ (still with an error in OpenMP/Brian2CUDA)? Probably the easiest solution to implement? Or how difficult would it be to implement the same behaviour in OpenMP and Brian2CUDA? For Brian2CUDA it would just be adding an atomic operation in the right place, which would incur a sizeable performance hit in your example, but would at least work. Not sure about OpenMP. And if we implement it, we could still raise an error and only use the atomic version when the legacy pref is given.

But honestly, I am all for less additional work right now :D I just want brian2cuda to be released. So I'm all for raising an error and only allowing legacy for cpp without openmp. And maybe open an issue and point to it in the error message. If it actually ever comes up for someone, we can still implement it?

Marcel Stimberg
@mstimberg
I'd say the main work is to actually detect the situation. Then we can easily raise a general error or a NotImplementedError for Brian2CUDA or OpenMP. It wouldn't be too difficult to make this work in OpenMP either, but I don't feel that it is worth it right now.
wxie2013
@wxie2013
Thanks for the answer, and nice to know that the Brian2CUDA package exists. Are there any limitations on this package? In other words, can one transfer any Brian2 code to Brian2CUDA, e.g. heterogeneous axonal delays? What's the advantage of Brian2CUDA over Brian2GeNN? Why not try openMP so one is not limited to Nvidia GPUs? When is it expected to be released?
wxie2013
@wxie2013
Typo correction above: I meant openCL instead of CUDA.
Marcel Stimberg
@mstimberg
Sorry, I didn't see the messages here. The idea of Brian2CUDA is to be a more "classical" backend to Brian2, i.e. to work basically like the C++ standalone mode but for CUDA. This would also mean that it supports all the features that Brian2 standalone supports, e.g. heterogeneous delays (that being said, the GeNN simulator does support them as well now, we will have to update the Brian2GeNN interface to benefit from this). But at the moment, Brian2CUDA is still work-in-progress and not yet ready for general use. @denisalevi might be able to say more about this, but I think for now the release date is still: "soon, hopefully" :)
A stable Brian2CUDA would be a great addition to the "Brian toolbox", but I think Brian2GeNN will still have its place even after the release of Brian2CUDA. Brian2GeNN is slightly cumbersome in the way that it goes Brian2 code → GeNN code → CUDA code, but this also means that we can automatically benefit from improvements in GeNN without having to do anything on the Brian side. For example, things like multi-GPU support, or indeed OpenCL (which I think the GeNN team is working on). Hope that makes things clearer.
Denis Alevi
@denisalevi
I didn't see the message either, sorry for the late reply. But I don't have much to add to what @mstimberg said. Just that I'm confident that we'll have a first release by the end of this year ;)
wxie2013
@wxie2013
nice. Thanks for the clarification.
wxie2013
@wxie2013
Is there a rough projection on when the feature of adding or removing synapses during a run will be implemented?
Marcel Stimberg
@mstimberg
It's on the list but no one is currently working on it. So I am afraid I cannot give any real time frame. I hope to implement an intermediate solution in the shorter term where you can call a disconnect function between runs.
wxie2013
@wxie2013
Thanks
Rohith Varma Buddaraju
@rohithvarma3000
Hello, I am Rohith Varma Buddaraju (@rohithvarma3000) and I am a new contributor to this project. I am really interested in starting to contribute, but I am not sure where to begin. Could someone please suggest a good first issue for me to work on, and help me work on it?
Marcel Stimberg
@mstimberg
Hi Rohith, apologies for not seeing this message earlier. We'd be very grateful for contributions, I guess the best start are issues tagged both with "easy" and with "suggested contribution". But I see that you already replied on a number of issues on github, I'll head over there and reply directly.
Marcel Stimberg
@mstimberg
Oh and by the way, additions to the documentation and new examples would be highly welcomed contributions as well.
atefeasd
@atefeasd
Hi. In my simulation, I need to determine the number of synapses that each presynaptic and postsynaptic neuron has. But when using the N_incoming and N_outgoing variables in the Synapses function, I receive this error: TypeError: __init__() got an unexpected keyword argument 'N_incoming'.
Marcel Stimberg
@mstimberg
Hi @atefeasd , the N_incoming and N_outgoing variables can be used to determine the number of incoming/outgoing synapses after creating them with the connect call. I am not sure whether that is what you want. Do you maybe instead want to fix the number of synapses per neuron, e.g. something like random connections but X connections per neuron?
atefeasd
@atefeasd
Yes. I want to create synapses between two neuron groups, but with a certain condition: 3 neurons in group B receive inputs from 4 neurons of group A, while each group has 10 neurons.
Marcel Stimberg
@mstimberg
Ok. I am not sure I understand the connectivity structure, though: do you mean that most neurons in A and B are not connected at all, and there is a subset of 3 neurons in group B that is fully all-to-all connected to all neurons in a subset of 4 neurons in group A?
atefeasd
@atefeasd
Thanks for your reply. Actually what I have in mind is that: 3 neurons of group B are randomly connected to 4 neurons of group A.
Marcel Stimberg
@mstimberg
Apologies, but I still do not understand what this means. 3 neurons out of 10 are connected to 4 neurons out of 10, but not all-to-all? So there are fewer than 12 connections in total?
atefeasd
@atefeasd
Let me tell you what I want to do exactly. I want to simulate two neuronal groups at a scale of 10^5 neurons each. Each neuron in a randomly chosen subset of the receiver neurons (say in group B) should receive inputs from all neurons in another randomly chosen subset of the sender neurons (group A). I want to draw this A-subset anew for each neuron in the B-subset. Also, I want the sizes of the subsets of A and B to be 30% and 40% of the group sizes of A and B, respectively.
Marcel Stimberg
@mstimberg
Thanks for the clarification, but isn't this what I said earlier, i.e. " most neurons in A and B are not connected at all, and there is a subset of 3 neurons in group B that is fully all-to-all connected to all neurons in a subset of 4 neurons in group A?"?
But that's not that important, I hope I do understand the idea.
So, there are two ways to implement this:
The first way is to construct the connectivity pattern yourself using Python and numpy, and end up with two lists (or 1-dimensional numpy arrays): one for the pre-synaptic indices, and one for the post-synaptic indices. E.g. if you select neurons 1, 3, 5 from group A and neurons 2, 4, 6, 8 from group B, you should end up with two lists like this:
pre_synaptic = [1, 3, 5, 1, 3, 5, 1, 3, 5, 1, 3, 5]
post_synaptic = [2, 2, 2, 4, 4, 4, 6, 6, 6, 8, 8, 8]
You can then plug these into a connect call:
synapses.connect(i=pre_synaptic, j=post_synaptic)
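The two index lists from the example above don't need to be written out by hand: they can be built with numpy's tile and repeat. This is just a sketch of the index construction, not Brian-specific code; the subset indices are the ones from the example.

```python
import numpy as np

# Subsets chosen in the example above
subset_A = np.array([1, 3, 5])      # pre-synaptic neurons in group A
subset_B = np.array([2, 4, 6, 8])   # post-synaptic neurons in group B

# All-to-all between the two subsets: tile the sources once per target,
# and repeat each target once per source.
pre_synaptic = np.tile(subset_A, len(subset_B))
post_synaptic = np.repeat(subset_B, len(subset_A))

print(pre_synaptic.tolist())   # [1, 3, 5, 1, 3, 5, 1, 3, 5, 1, 3, 5]
print(post_synaptic.tolist())  # [2, 2, 2, 4, 4, 4, 6, 6, 6, 8, 8, 8]
```

The resulting arrays can be passed directly as the i and j arguments of the connect call.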
Marcel Stimberg
@mstimberg
The other way is a bit more "Brian-style", and can be useful because it gives you an easy way to reference the connected neurons later:
In this approach, you will add a new parameter to the equations for group A and group B, let's call that parameter selected:
''' # neuron equations
# ...
selected : boolean (constant)
'''
You can then set this parameter to True for a subset of the neurons in A and in B:
# Variant 1
group_A.selected = 'rand() < 0.3'
# Variant 2
group_A.selected[np.random.choice(len(group_A), size=int(0.3*len(group_A)), replace=False)] = True
# ...same thing for group B with 0.4
Marcel Stimberg
@mstimberg
The difference between the two variants is that in the first you select each of the neurons in group A with 30% probability, which will give you around 30% of the population, but the actual number is random. In variant 2, you select exactly 30%.
Once you have set the selected parameter on both groups, the actual connection statement becomes very simple:
synapses.connect('selected_pre and selected_post')
Hope that helps! BTW: Our forum might be a better place for such questions since the answer could be very useful for other users as well. https://brian.discourse.group
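The difference between the two variants Marcel describes can be seen with plain numpy, independent of Brian. The group size and seed below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 1000                      # illustrative group size

# Variant 1: each neuron selected independently with p = 0.3,
# so the total count is binomially distributed around 300.
variant1 = rng.random(n) < 0.3

# Variant 2: exactly 30% of the neurons selected.
idx = rng.choice(n, size=int(0.3 * n), replace=False)
variant2 = np.zeros(n, dtype=bool)
variant2[idx] = True

print(variant1.sum())  # around 300; the exact count depends on the draw
print(variant2.sum())  # exactly 300
```

For large groups the relative fluctuation of variant 1 shrinks, which is why the distinction mostly matters for small populations or when an exact count is required.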
atefeasd
@atefeasd
Dear Marcel, thank you very much for the comprehensive reply. Although I didn't mean to connect them all-to-all but rather "one-to-all", I got the general idea of how to specify my conditions. Thanks again!
Marcel Stimberg
@mstimberg
Ah, I think I finally understand, sorry for being a bit slow :) If the group of neurons in A is different for each neuron in B, then you will have to use the first approach, i.e. determine the indices yourself.
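The first approach for this case, where each selected B-neuron gets its own freshly drawn A-subset, could be sketched like this. The group sizes and seed are illustrative assumptions (the question mentions 10^5 neurons; 100 is used here to keep the example fast); only the 30%/40% fractions come from the question.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_A, n_B = 100, 100                 # illustrative group sizes
frac_A, frac_B = 0.3, 0.4           # subset fractions from the question

k_A = int(frac_A * n_A)             # senders feeding each receiving neuron
targets = rng.choice(n_B, size=int(frac_B * n_B), replace=False)

pre_idx, post_idx = [], []
for j in targets:
    # Draw a fresh random A-subset for every selected B neuron
    sources = rng.choice(n_A, size=k_A, replace=False)
    pre_idx.extend(sources)
    post_idx.extend([j] * k_A)

pre_idx = np.array(pre_idx)
post_idx = np.array(post_idx)
# pre_idx and post_idx can then be passed to a connect call
# as the i and j arguments, as in the first approach above.
```

Each selected B-neuron ends up with k_A incoming synapses from its own random set of A-neurons, which is exactly the "different A-subset per B-neuron" structure.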
atefeasd
@atefeasd
Thank you very much. Yes that is what I want. Excuse me if I have not been clear enough regarding my question.