Rihana Naderi
@RNaderi

Hi @RNaderi . Would you mind asking this question on https://brian.discourse.group ? It's not a simple answer, and I think others could benefit from it. Thanks!

for sure. I thought it would be short. That was why I asked here. Thank you.

fededalba
@fededalba
Hello! I need to run 50 long simulations while changing a parameter, and I need to change the time step after the first 10. I have defined a Clock object and passed it to all my objects, then I have defined a Network object so I can store, change the parameter, and then restore with the new value. At the tenth cycle I change the clock like this: clk.dt_ = 0.00001 (because using clk.dt = 0.001*ms doesn't change anything). I was wondering if there is an easier way to do it without using private attributes,
and whether this procedure is correct or I risk obtaining wrong results?
Marcel Stimberg
@mstimberg
Could you maybe give a little example with some minimal example code (might be better on https://brian.discourse.group)? I did not quite understand in what way you store/restore/run and when exactly you change the clock. Note that clk.dt_ is not a private attribute (that would be clk._dt); the ..._ syntax is just the value without the units, and normally setting the value via clk.dt_ = ... (unitless, in seconds) or via clk.dt = ... (with units) should do the exact same thing, as long as the values match. That value seems to be a bit on the tiny side, though.
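A minimal sketch of that pattern (the parameter values here are only illustrative, and everything is run through a single Network object):
from brian2 import *

clk = Clock(dt=0.1*ms)
G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1', clock=clk)
net = Network(G)
net.run(10*ms)     # first part with dt = 0.1 ms
clk.dt = 0.01*ms   # change the time step between runs (with units)
net.run(10*ms)     # continue with the finer time step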
Rihana Naderi
@RNaderi
Hi all, this is my PoissonInput: p = PoissonInput(neurons, 'g_e', 1, 5*Hz, 1*nS). I want to run it again after the first run with a new rate (p.rate = 3*Hz), but I receive a message: can't set attribute. How can I do this?
Marcel Stimberg
@mstimberg
Hi @RNaderi. The PoissonInput class cannot change its rate between runs. Instead, you can create a new PoissonInput object for the new rate and discard the previous one.
Rihana Naderi
@RNaderi
Thanks @mstimberg, but when I recreate it with a new rate, I receive this message: "The magic network contains a mix of objects that has been run before and new objects, Brian does not know whether you want to start a new simulation or continue an old one". I want to feed Poisson-distributed spike train input with different rates across runs; in fact, the only thing that should change is my Poisson input. What is your suggestion?
Marcel Stimberg
@mstimberg
Ah sorry, I did not think about that. In cases like this you have to be more explicit about what the components of your model are, by creating a Network object. Something like this should work:
# .. define network
p = PoissonInput(...)
net = Network(collect())  # create network with all objects
net.run(...)  # run first simulation
net.remove(p)  # remove previous PoissonInput
p = PoissonInput(...) # create new PoissonInput
net.add(p)  # add the new object to the network
net.run(...) # run new simulation
Rihana Naderi
@RNaderi
Thanks @mstimberg. One more question: if I remove and add the input each time, do other variables such as the weights reset, or does this apply only to the input, with the simulation continuing from the last values of all state and internal variables?
Rihana Naderi
@RNaderi
Since I don't have a deep understanding of Poisson input, I wanted to ask: if I generate 2 PoissonInputs with the same rate (and the same initialization), will the result be the same? I mean in terms of the correlation of the input spikes.
Marcel Stimberg
@mstimberg
If you remove/add elements to an existing network as in my example, everything else in the network is unaffected. Synaptic weights, state variables, etc. are all unchanged, and the simulation continues where it left off. I did not think of it earlier, but if you want to keep your previous approach without creating a Network object, an alternative would be to create all your inputs in the beginning, but only make one of them active at a time. Something along the lines of
# ... define network
p1 = PoissonInput(...)
p2 = PoissonInput(...)
p2.active = False  # switch off second input
run(...)
# switch from first to second input
p1.active = False
p2.active = True
run(...)
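(A runnable version of this sketch, reusing the parameter values from the complete example further below:)
from brian2 import *

G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
p1 = PoissonInput(G, 'v', 10, 50*Hz, 0.1)
p2 = PoissonInput(G, 'v', 10, 5*Hz, 0.1)
state_mon = StateMonitor(G, 'v', record=0)
p2.active = False  # start with only the first input active
run(100*ms)
p1.active = False  # switch from the first to the second input
p2.active = True
run(100*ms)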

Since I don't have a deep understanding of Poisson input, I wanted to ask: if I generate 2 PoissonInputs with the same rate (and the same initialization), will the result be the same? I mean in terms of the correlation of the input spikes.

Not sure I understand. The results will not be exactly the same (different random numbers), but the statistics are the same in both cases. The spikes are uncorrelated.

Rihana Naderi
@RNaderi

If you remove/add elements to an existing network as in my example, everything else in the network is unaffected. Synaptic weights, state variables, etc. are all unchanged, and the simulation continues where it left off. I did not think of it earlier, but if you want to keep your previous approach without creating a Network object, an alternative would be to create all your inputs in the beginning, but only make one of them active at a time. Something along the lines of

# ... define network
p1 = PoissonInput(...)
p2 = PoissonInput(...)
p2.active = False  # switch off second input
run(...)
# switch from first to second input
p1.active = False
p2.active = True
run(...)

How interesting. Thank you very much.

Since I don't have a deep understanding of Poisson input, I wanted to ask: if I generate 2 PoissonInputs with the same rate (and the same initialization), will the result be the same? I mean in terms of the correlation of the input spikes.

Not sure I understand. The results will not be exactly the same (different random numbers), but the statistics are the same in both cases. The spikes are uncorrelated.

Is there any way to monitor a PoissonInput in the same way as can be done for a PoissonGroup (SpikeMonitor(PoissonGroup))?

Rihana Naderi
@RNaderi

Ah sorry, I did not think about that. In cases like this you have to be more explicit about what the components of your model are, by creating a Network object. Something like this should work:

# .. define network
p = PoissonInput(...)
net = Network(collect())  # create network with all objects
net.run(...)  # run first simulation
net.remove(p)  # remove previous PoissonInput
p = PoissonInput(...) # create new PoissonInput
net.add(p)  # add the new object to the network
net.run(...) # run new simulation

When I use this approach, after the first run I receive this message: "neurongroup has already been run in the context of another network. Use add/remove to change the objects in a simulated network instead of creating a new one." Should I also remove the NeuronGroup each time?

Marcel Stimberg
@mstimberg
Are you sure you are not creating a second Network object?
Here's a minimal example that works:
from brian2 import *
import matplotlib.pyplot as plt

G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
p1 = PoissonInput(G, 'v', 10, 50*Hz, 0.1)
state_mon = StateMonitor(G, 'v', record=0)
net = Network(collect())
net.run(100*ms)
net.remove(p1)
p2 = PoissonInput(G, 'v', 10, 5*Hz, 0.1)
net.add(p2)
net.run(100*ms)
plt.plot(state_mon.t/ms, state_mon.v[0])
plt.show()
Rihana Naderi
@RNaderi

Are you sure you are not creating a second Network object?

Yes, I'm sure. I've used net = Network(collect()) once, after all my monitoring variables. The code you sent me works for me, but I receive that error with my code.

Marcel Stimberg
@mstimberg

Is there any way to monitor a PoissonInput in the same way as can be done for a PoissonGroup (SpikeMonitor(PoissonGroup))?

It is not as straightforward, since it does not generate any individual events/spikes, but instead determines the total number of spikes for each time step (this is much faster if you have several neurons, i.e. N >> 1. If this is not the case, then rather use a PoissonGroup). If the PoissonInput is the only thing that updates the target variable (g_e in your earlier example), then you can use a StateMonitor to observe that variable and see the effect of PoissonInput. By comparing the value before and after the update, you get its effect. Here's how to update my earlier example to plot the PoissonInput contribution:

from brian2 import *
import matplotlib.pyplot as plt

G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
p1 = PoissonInput(G, 'v', 10, 50*Hz, 0.1)
state_mon = StateMonitor(G, 'v', record=0)
poisson_mon_before = StateMonitor(G, 'v', record=0, when='before_synapses')
poisson_mon_after = StateMonitor(G, 'v', record=0, when='after_synapses')
net = Network(collect())
net.run(100*ms)
net.remove(p1)
p2 = PoissonInput(G, 'v', 10, 5*Hz, 0.1)
net.add(p2)
net.run(100*ms)
fig, (ax_top, ax_bottom) = plt.subplots(2, 1, sharex=True)
ax_top.plot(state_mon.t/ms, state_mon.v[0])
ax_bottom.plot(poisson_mon_before.t/ms, poisson_mon_after.v[0] - poisson_mon_before.v[0])
plt.show()
[attached image: poisson_input.png]
The lower plot shows the PoissonInput contribution.
If there is something else that updates the target variable, e.g. a Synapses object, then you have to use when='synapses' for the StateMonitors, and use the order argument of the StateMonitors and the PoissonInput to make sure that the order of operations is: first monitor → PoissonInput → second monitor (see https://brian2.readthedocs.io/en/stable/user/running.html#scheduling)
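A sketch of that scheduling (the group, values, and synapse rule here are only illustrative, not from the chat):
from brian2 import *

G = NeuronGroup(2, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
syn = Synapses(G, G, on_pre='v_post += 0.2')  # something else updating the same variable
syn.connect(i=0, j=1)
# the Synapses pathway keeps its default order (0) in the 'synapses' slot,
# so it runs before the monitor → PoissonInput → monitor bracket below
mon_before = StateMonitor(G, 'v', record=True, when='synapses', order=1)
p = PoissonInput(G, 'v', 10, 50*Hz, 0.1, order=2)  # PoissonInput already runs in the 'synapses' slot
mon_after = StateMonitor(G, 'v', record=True, when='synapses', order=3)
run(100*ms)
# mon_after.v - mon_before.v now isolates the PoissonInput contribution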
Rihana Naderi
@RNaderi
Well-explained. I really appreciate your time and great ideas. @mstimberg
Rihana Naderi
@RNaderi
I would be grateful if you could take a look at my issue in this link : https://brian.discourse.group/t/issues-with-spikegeneratorgroup/626/1
Rihana Naderi
@RNaderi
Consider a NeuronGroup: neurons = NeuronGroup(N, eqs, threshold='v > v_thr'), exc_neurons = neurons[:N_e],
inh_neurons = neurons[N_e:]. I want to give 2 different v_thr values to the excitatory and inhibitory neurons (N = N_e + N_i). How can I define v_thr as a per-neuron parameter in the neuron equations?
Rihana Naderi
@RNaderi

I would be grateful if you could take a look at my issue in this link : https://brian.discourse.group/t/issues-with-spikegeneratorgroup/626/1

I've just solved this problem by adding a per-run offset to the spike_times of each input.

Marcel Stimberg
@mstimberg

I would be grateful if you could take a look at my issue in this link : https://brian.discourse.group/t/issues-with-spikegeneratorgroup/626/1

I've just solved this problem by adding a per-run offset to the spike_times of each input.

Great, that's what I would have suggested :blush: Could you answer/close your question on the discourse group as well, please?

Consider a NeuronGroup: neurons = NeuronGroup(N, eqs, threshold='v > v_thr'), exc_neurons = neurons[:N_e],
inh_neurons = neurons[N_e:]. I want to give 2 different v_thr values to the excitatory and inhibitory neurons (N = N_e + N_i). How can I define v_thr as a per-neuron parameter in the neuron equations?

You can define an individual threshold for each neuron by adding the threshold as a parameter to the equations (as in the second example in the documentation). Then you can write exc_neurons.v_thr = ... and inh_neurons.v_thr = ...

Rihana Naderi
@RNaderi

I would be grateful if you could take a look at my issue in this link : https://brian.discourse.group/t/issues-with-spikegeneratorgroup/626/1

I've just solved this problem by adding a per-run offset to the spike_times of each input.

Great, that's what I would have suggested :blush: Could you answer/close your question on the discourse group as well, please?

For sure. I'll do it.

Rihana Naderi
@RNaderi

Consider a NeuronGroup: neurons = NeuronGroup(N, eqs, threshold='v > v_thr'), exc_neurons = neurons[:N_e],
inh_neurons = neurons[N_e:]. I want to give 2 different v_thr values to the excitatory and inhibitory neurons (N = N_e + N_i). How can I define v_thr as a per-neuron parameter in the neuron equations?

You can define an individual threshold for each neuron by adding the threshold as a parameter to the equations (as in the second example in the documentation). Then you can write exc_neurons.v_thr = ... and inh_neurons.v_thr = ...

exc_neurons.reset = 'v = -65*mV'
inh_neurons.reset = 'v = -60*mV'
I had done this, but I received this message: Could not find a state variable with name "reset". Use the add_attribute method if you intend to add a new attribute to the object.

Marcel Stimberg
@mstimberg
If you want to change the reset, then you have to add a variable of that name to the equations, same as for the threshold. I wouldn't call it reset, but rather something like v_reset. I.e., add v_reset : volt (constant) to the equations, then you can do exc_neurons.v_reset = -65*mV and the same for the inhibitory neurons.
Rihana Naderi
@RNaderi
Thanks @mstimberg. I meant for the ">" in the equation. Is exc_neurons.v_thr = 'v > -65' correct? And should it be defined as constant?
Rihana Naderi
@RNaderi
When I use exc_neurons.v_reset = -65*mV or exc_neurons.v_reset = 'v = -65*mV', in both cases I receive this error during the simulation: Parsing the statement failed: v_reset
Marcel Stimberg
@mstimberg

Thanks @mstimberg. I meant for the ">" in the equation. Is exc_neurons.v_thr = 'v > -65' correct? And should it be defined as constant?

No, this is not correct. Your threshold condition in the neuron group should be v > v_thr and v_thr : volt (constant) should be in your equations (as in the example in the documentation: https://brian2.readthedocs.io/en/stable/user/models.html#threshold-and-reset). Then, you can set exc_neurons.v_thr = -65*mV.

When I use exc_neurons.v_reset = -65*mV or exc_neurons.v_reset = 'v = -65*mV', in both cases I receive this error during the simulation: Parsing the statement failed: v_reset

I am a bit confused: in addition to the threshold, you also want to set the reset differently for excitatory and inhibitory neurons, right? These lines will not give that error, but I guess you made an error elsewhere? E.g. did you write reset='v_reset' in the NeuronGroup definition, instead of reset='v = v_reset'?
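Putting the two together, a minimal sketch (the equations and threshold values here are illustrative; only the reset values are taken from your messages):
from brian2 import *

N, N_e = 100, 80
eqs = '''
dv/dt = (-70*mV - v)/(10*ms) : volt
v_thr : volt (constant)
v_reset : volt (constant)
'''
neurons = NeuronGroup(N, eqs, threshold='v > v_thr', reset='v = v_reset')
neurons.v = -70*mV
exc_neurons = neurons[:N_e]
inh_neurons = neurons[N_e:]
exc_neurons.v_thr = -50*mV
inh_neurons.v_thr = -45*mV
exc_neurons.v_reset = -65*mV
inh_neurons.v_reset = -60*mV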

Rihana Naderi
@RNaderi
I got it. I had made a stupid mistake. Thanks for your patience.
Rihana Naderi
@RNaderi
@mstimberg I'm having trouble with SpikeGeneratorGroups; would you please take a look at my second issue in this link: https://brian.discourse.group/t/issues-with-spikegeneratorgroup/626/3
wxie2013
@wxie2013

I tried to install Brian2 with the following command:

pip3 install --target=/home/wxie/Brian2/pkgs --upgrade brian2 pytest sphinx docutils mpi4py

when importing brian2, it shows the following errors:

from brian2 import *
/home/wxie/Brian2/pkgs/_distutils_hack/__init__.py:17: UserWarning: Distutils was imported before Setuptools, but importing Setuptools also replaces the distutils module in sys.modules. This may lead to undesirable behaviors or errors. To avoid these issues, avoid using distutils directly, ensure that setuptools is installed in the traditional way (e.g. not an editable install), and/or make sure that setuptools is always imported before distutils.
warnings.warn(
/home/wxie/Brian2/pkgs/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")

any hints are appreciated

Marcel Stimberg
@mstimberg
Hi @wxie2013 . Note that these are warnings, not errors, so if everything works fine apart from these warnings there is nothing to worry about. I haven't seen these warnings in a long time, and I don't remember the exact details, but I think it was a temporary issue in setuptools. Could you check that you are using a recent setuptools version, or simply update it to the latest version?
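For example, reusing the --target directory from your install command (adjust the path to your setup), something like:
pip3 install --target=/home/wxie/Brian2/pkgs --upgrade setuptools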
wxie2013
@wxie2013
Hi @mstimberg, it turns out to be the import order, as indicated in the message. Importing setuptools before brian2 cleaned up the warning message.
Marcel Stimberg
@mstimberg
Yes, that's a workaround. But I checked again, this is a problem in very specific versions of setuptools only (>=v49.2.0 and <v49.3.1), you won't see this warning with an up-to-date setuptools (see https://github.com/brian-team/brian2/issues/1213).
wxie2013
@wxie2013
I see. Thanks for the follow up.
wxie2013
@wxie2013

Hi @Willam_Xavier. This project is about working on the Brian2GeNN interface so that it supports more of GeNN’s features. Except for unforeseen bug fixes, etc., it should not involve changing anything on GeNN’s side. All that said, please note that the application deadline for this year’s Google Summer of Code was on April 19th…

Hi @mstimberg, I thought GeNN doesn't support heterogeneous synaptic delays. Is it right that GeNN actually does support the feature, and one just needs to add another interface to bridge Brian2 and GeNN?

Marcel Stimberg
@mstimberg:matrix.org
Hi @wxie2013: GeNN did not support heterogeneous delays back when Brian2GeNN was created, but it does now. So Brian2GeNN needs changes to support this feature, i.e. translate heterogeneous Brian delays into the format that GeNN works with.
wxie2013
@wxie2013
The message from @neworderofjamie in this thread is a bit confusing:
brian-team/brian2genn#109
Does that mean the delays in GeNN are dendritic instead of axonal? He also mentioned that "they don't necessarily play well with STDP as back-propogating postsynaptic spikes can't be delayed heterogeneosuly to match". Would you please clarify that a bit?
Rihana Naderi
@RNaderi
Hi there, I have some doubts about the usage of "summed". Consider a simple example: syn = Synapses(G, G, 'v_post = v_pre : 1 (summed)'), syn.i = [0, 0, 1], syn.j = [1, 2, 2]. 1) Here, is v for neuron 2 the sum of v for neuron 0 and v for neuron 1? 2) If I remove "summed", how is v_2 interpreted? Thanks in advance.
Marcel Stimberg
@mstimberg

The message from @neworderofjamie in this thread is a bit confusing:
brian-team/brian2genn#109
Does that mean the delays in GeNN are dendritic instead of axonal? He also mentioned that "they don't necessarily play well with STDP as back-propogating postsynaptic spikes can't be delayed heterogeneosuly to match". Would you please clarify that a bit?

I always find "axonal" and "dendritic" delays a bit confusing, since in most models, e.g. something like on_pre='ge_post += w', it does not change anything. In Brian, if you set a delay, it is axonal in the sense that we wait for that time before applying the statement. This also means that if your on_pre refers to some pre-synaptic neuronal variable, this is taken at the time when the spike arrives, i.e. after the delay. You'd have to apply a workaround to describe something where it should store a pre-synaptic variable at the spike time and only apply it after a delay on the post-synaptic side. I'm actually not quite sure what it means in GeNN that the delay is dendritic, since AFAICT the synaptic model with delay does not support any access to neuronal variables, so all this should not matter.
For STDP, things are very different. The effective time difference between the pre- and post-synaptic spike depends on the difference between the axonal delay (the time after which the pre-synaptic spike arrives at the synapse) and the dendritic delay (the time after which the post-synaptic spike arrives at the synapse). In Brian, you can set synapses.pre.delay and synapses.post.delay independently, but GeNN does not have such a mechanism.
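For illustration, a sketch of setting the two pathway delays independently in Brian (the synaptic rule below is only a placeholder, not a real STDP implementation):
from brian2 import *

G = NeuronGroup(2, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
syn = Synapses(G, G, 'w : 1',
               on_pre='v_post += w; w += 0.01',  # placeholder update on pre-synaptic spikes
               on_post='w -= 0.01')              # placeholder update on post-synaptic spikes
syn.connect(i=0, j=1)
syn.pre.delay = 2*ms   # "axonal" delay: wait 2 ms before applying on_pre
syn.post.delay = 1*ms  # "dendritic" delay: wait 1 ms before applying on_post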

Marcel Stimberg
@mstimberg

Hi there, I have some doubts about the usage of "summed". Consider a simple example: syn = Synapses(G, G, 'v_post = v_pre : 1 (summed)'), syn.i = [0, 0, 1], syn.j = [1, 2, 2]. 1) Here, is v for neuron 2 the sum of v for neuron 0 and v for neuron 1? 2)

Hi. Yes, the summed variable here means that at every time step, you sum over all v_pre values for the neurons connecting to a neuron. So with your connection pattern, neuron 0 does not receive any input, so its v will be set to 0; neuron 1 receives input from neuron 0, so its v will be set to that value; and v for neuron 2 is the sum of v for neurons 0 and 1. The situation is a bit odd, though: normally you do not use the same variable of the same group on the left- and the right-hand side. Basically, here everything will inevitably be 0.

If I remove "summed", how v_2 is interpreted? Thank in advance.

What is v_2? You cannot remove (summed) in the above equations, just stating v_post = v_pre : 1 is not allowed/meaningful.
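For concreteness, a small sketch of what (summed) computes, using a separate source and target group (and made-up values) to avoid the same-variable oddity discussed above:
from brian2 import *

source = NeuronGroup(2, 'v : 1')
target = NeuronGroup(3, 'x : 1')  # x collects the summed input
source.v = [1, 2]
syn = Synapses(source, target, 'x_post = v_pre : 1 (summed)')
syn.connect(i=[0, 0, 1], j=[1, 2, 2])
run(defaultclock.dt)
print(target.x[:])  # [0. 1. 3.]: target 1 receives v[0], target 2 receives v[0] + v[1]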

Rihana Naderi
@RNaderi
You are right. Could you please take a look at this example: https://brian2.readthedocs.io/en/stable/examples/frompapers.Stimberg_et_al_2018.example_4_synrel.html My question is: if I remove "summed" here, how is Y_S for each post-synaptic neuron interpreted?
Marcel Stimberg
@mstimberg
You cannot remove summed in that example either. It says Y_S_post = Y_S_pre, but in general there is more than one pre-synaptic Y_S for each post-synaptic Y_S, since more than one neuron on the pre-synaptic side can be connected to a post-synaptic neuron (in that example, it is synapses and astrocytes, but the same reasoning applies). So Y_S_post = Y_S_pre does not make sense. With (summed) it is interpreted as Y_S_post = sum(Y_S_pre), where the sum is over all the pre-synaptic elements connecting to the post-synaptic element. This is defined regardless of how many connections there are.