Brian2 is an improved and partly rewritten version of Brian, the spiking neural network simulator (see http://briansimulator.org). For the documentation, see http://brian2.readthedocs.org
Where is the snapshot stored when I use "store" after a simulation? I'd like to reuse my network without a long simulation.
If you use a simple store(), the snapshot is stored in memory, i.e. it is no longer available when the Python process ends. You can store things to disk using the filename argument, but note that this is not a general mechanism to store a model to disk, as explained in the documentation: https://brian2.readthedocs.io/en/stable/reference/brian2.core.network.Network.html#brian2.core.network.Network.store
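For illustration, a rough sketch of the filename workflow (the network, the snapshot name 'after_long_run' and the file 'state.dat' are made up for this example). As noted above, restoring only works if you first recreate the same network structure in the new session:
from brian2 import *

G = NeuronGroup(10, 'dv/dt = -v/(10*ms) : 1')
net = Network(G)
net.run(1000*ms)  # the long simulation
net.store('after_long_run', filename='state.dat')  # snapshot written to disk

# In a later session: rebuild the identical network, then restore the snapshot
net.restore('after_long_run', filename='state.dat')
net.run(10*ms)  # continues from the stored state instead of re-simulating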
Got it. Thank you very much for all the information and help.
Hi @RNaderi. You cannot change the values in a TimedArray, but you can define a new TimedArray for each run. If it has the same name, all references to that name will pick up the new values. But note that G.I = 'I_recorded(t)' is very likely not doing what you want: this will set the initial value of G.I to the value of I_recorded(t) at that time, i.e. probably at t=0*ms. This means that the value will not change during the simulation. Instead, your equations need to include something like I = I_recorded(t) : amp to state that the current gets its values from the TimedArray.
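A minimal sketch of this pattern (the equations, units, and array values below are made up for illustration):
from brian2 import *

# First input: values for t = 0..3 ms
I_recorded = TimedArray([0, 1, 0]*nA, dt=1*ms)
G = NeuronGroup(1, '''dv/dt = (I*Mohm - v)/(10*ms) : volt
                      I = I_recorded(t) : amp''')  # current follows the TimedArray
run(3*ms)

# Rebinding the name is enough: the next run() looks up 'I_recorded' again.
# TimedArray times always start at 0 s, so for a continued run the new array
# has to provide values for the later times as well (see the next answer).
I_recorded = TimedArray([0, 0, 0, 1, 0, 1]*nA, dt=1*ms)
run(3*ms)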
I also did that, but it keeps using the last value of my old I_recorded(t), which is 0 (I_recorded = TimedArray([1,1,0], dt=1*ms)), and I can't see my new input (I_recorded = TimedArray([0,0,1,1,0], dt=1*ms)). I want to run another run with new inputs from a different TimedArray immediately after finishing a long run, but it doesn't pick up the new inputs and keeps repeating the last values of the old I_recorded.
TimedArray entries correspond to the time starting at 0s. If you continue a run, the time will not be at 0s anymore, so it will not use the values at the start of the new TimedArray. Either create a single TimedArray with the values for all the inputs one after the other (so values for the first input will be at, say, 0ms, values for the second input start at 3ms, etc.), or use the store/restore functionality to restart the simulation at 0s.
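As a sketch of the store/restore option (same toy model as above, values only for illustration): restore() brings the clock back to the stored time, so the second TimedArray is read from t = 0 s again.
from brian2 import *

I_recorded = TimedArray([1, 1, 0]*nA, dt=1*ms)
G = NeuronGroup(1, '''dv/dt = (I*Mohm - v)/(10*ms) : volt
                      I = I_recorded(t) : amp''')
store()        # snapshot at t = 0 s
run(3*ms)      # first input

restore()      # back to t = 0 s and to the stored state
I_recorded = TimedArray([0, 0, 1, 1, 0]*nA, dt=1*ms)
run(5*ms)      # second input, again starting at t = 0 s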
Use float (for a scalar value) or np.array (for an array) to remove the units. See https://brian2.readthedocs.io/en/stable/user/units.html#removing-units
To set index to a random number between 0 and 100 (i.e. up to 99), you can use S.index = 'int(rand()*100)', but this will lead to some repeated and some missing values. To not repeat indices, you'll have to use numpy instead of Brian's string expression framework, e.g. S.index = np.random.choice(100, replace=False, size=100).
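A small sketch of the two approaches, assuming index is an integer parameter you defined yourself on some group S (the group below is made up for illustration):
import numpy as np
from brian2 import *

S = NeuronGroup(100, 'index : integer')

# String expression: evaluated per neuron; values can repeat and some may be missing
S.index = 'int(rand()*100)'

# NumPy: a random permutation of 0..99, so every value appears exactly once
S.index = np.random.choice(100, size=100, replace=False)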
When I copy-pasted the code at the end of the Brian tutorial part 1, there was an error on this line:
x = hist(spikemon.t/ms, 100, histtype='stepfilled', facecolor='k', weights=list(ones(len(spikemon))/(N*defaultclock.dt)))
It said "ValueError: weights should have the same shape as x".
For a <class 'brian2.units.fundamentalunits.Quantity'>, what is the best way to strip the unit and just get the number? For example, 9. ms, and I just need to get 9.
You can use float(9*ms) to get the numerical part, but it returns the value in SI units, e.g. 0.009.
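A few variants, using the 9*ms example from above (just a sketch):
import numpy as np
from brian2 import *

x = 9*ms
print(float(x))       # 0.009 -- plain float, always in SI units (seconds)
print(x / ms)         # 9.0   -- divide by the unit you want the number in
print(np.asarray(x))  # 0.009 -- unitless NumPy array, again in SI units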
If you have two run(...) statements, the second one will continue where the previous simulation stopped. This includes all variables such as synaptic weights. Statements like S.w = 'rand()' are only executed at the point in the code where they are written; they do not get automatically re-executed for a second run.
how about when I import my weights as initialization instead of rand()?
Not sure I understand, you mean when you initialize weights with some concrete values, S.w = some_values? This does not change anything; this assignment is executed only once, like any other assignment.
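A minimal sketch (the neuron and synapse models are made up for illustration): the assignment only takes effect where it appears, so to get new weights for the second run you assign again between the two run() calls.
from brian2 import *

G = NeuronGroup(10, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
S = Synapses(G, G, 'w : 1', on_pre='v_post += w')
S.connect(p=0.2)

S.w = 'rand()'   # weights for the first run
run(100*ms)

# Without the following line, the second run would simply continue with the
# weights as they were at the end of the first run:
S.w = 'rand()'   # re-randomize (or assign your imported values: S.w = some_values)
run(100*ms)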
Hi @RNaderi . Would you mind asking this question on https://brian.discourse.group ? It's not a simple answer, and I think others could benefit from it. Thanks!
for sure. I thought it would be short. That was why I asked here. Thank you.
clk.dt_ is not a private attribute (that would be clk._dt); the ..._ syntax is just the value without the units. Normally, setting clk.dt_ = ... or clk.dt = 0.00001 or clk.dt = 0.001*ms should do the exact same thing. That value seems to be a bit on the tiny side, though.
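For illustration (using defaultclock; the numbers are arbitrary), setting the time step with or without units:
from brian2 import *

defaultclock.dt = 0.01*ms   # with units
print(defaultclock.dt)      # e.g. 10. us
print(defaultclock.dt_)     # 1e-05 -- the same value without units, in seconds

defaultclock.dt_ = 1e-5     # unitless assignment, interpreted as seconds
print(defaultclock.dt)      # still the same time step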
Ah sorry, I did not think about that. In cases like this you have to be more explicit about what the components of your model are, by creating a Network object. Something like this should work:
# ... define network
p = PoissonInput(...)
net = Network(collect())  # create network with all objects
net.run(...)  # run first simulation
net.remove(p)  # remove previous PoissonInput
p = PoissonInput(...)  # create new PoissonInput
net.add(p)  # add the new object to the network
net.run(...)  # run new simulation
If you remove/add elements to an existing network as in my example, everything else in the network is unaffected. Synaptic weights, state variables, etc. are all unchanged, and the simulation continues where it left off. I did not think of it earlier, but if you want to keep your previous approach without creating a Network object, an alternative would be to create all your inputs in the beginning, but only make one of them active at a time. Something along the lines of:
# ... define network
p1 = PoissonInput(...)
p2 = PoissonInput(...)
p2.active = False  # switch off second input
run(...)
# switch from first to second input
p1.active = False
p2.active = True
run(...)
Since I don't have a deep understanding of Poisson input, I wanted to ask: if I generate 2 PoissonInputs with the same rate (with the same initialization), will the result be the same? I mean in terms of correlation of the input spikes.
Not sure I understand. The results will not be exactly the same (different random numbers), but the statistics are the same in both cases. The spikes are uncorrelated.
How interesting. Thank you very much.
When I use the Network approach from above, after the first run I receive this message: "neurongroup has already been run in the context of another network. Use add/remove to change the objects in a simulated network instead of creating a new one." Should I also remove the NeuronGroup each time?
from brian2 import *
import matplotlib.pyplot as plt

G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
p1 = PoissonInput(G, 'v', 10, 50*Hz, 0.1)
state_mon = StateMonitor(G, 'v', record=0)
net = Network(collect())  # explicit network, so objects can be added/removed
net.run(100*ms)
net.remove(p1)            # only the PoissonInput is removed; the NeuronGroup and monitor stay
p2 = PoissonInput(G, 'v', 10, 5*Hz, 0.1)
net.add(p2)
net.run(100*ms)
plt.plot(state_mon.t/ms, state_mon.v[0])
plt.show()
Is there any way to monitor a PoissonInput in the way that can be used for a PoissonGroup (SpikeMonitor(PoissonGroup))?
It is not as straightforward, since a PoissonInput does not generate any individual events/spikes, but instead determines the total number of spikes for each time step (this is much faster if you have several neurons, i.e. N >> 1; if this is not the case, rather use a PoissonGroup). If the PoissonInput is the only thing that updates the target variable (g_e in your earlier example), then you can use a StateMonitor to observe that variable and see the effect of the PoissonInput. By comparing the value before and after the update, you get its effect. Here's how to update my earlier example to plot the PoissonInput contribution:
from brian2 import *
import matplotlib.pyplot as plt

G = NeuronGroup(1, 'dv/dt = -v/(10*ms) : 1')
p1 = PoissonInput(G, 'v', 10, 50*Hz, 0.1)
state_mon = StateMonitor(G, 'v', record=0)
# The PoissonInput applies its updates in the 'synapses' scheduling slot, so
# recording v just before and just after that slot isolates its contribution
poisson_mon_before = StateMonitor(G, 'v', record=0, when='before_synapses')
poisson_mon_after = StateMonitor(G, 'v', record=0, when='after_synapses')
net = Network(collect())
net.run(100*ms)
net.remove(p1)
p2 = PoissonInput(G, 'v', 10, 5*Hz, 0.1)
net.add(p2)
net.run(100*ms)
fig, (ax_top, ax_bottom) = plt.subplots(2, 1, sharex=True)
ax_top.plot(state_mon.t/ms, state_mon.v[0])
ax_bottom.plot(poisson_mon_before.t/ms, poisson_mon_after.v[0] - poisson_mon_before.v[0])
plt.show()