Well I'll just leave this here: I'm trying to implement in DiffEq the update rule for a spiking neural network. For now I've implemented it in plain Julia and it works decently, but I'd like to speed it up a little:
The function simulate just initializes the variables and then runs step! 10000 times.
The function step! takes as input:
V: a matrix (neurons × time), where values are the potentials of individual neurons at a given time
sd: a vector (neurons), where values represent how long ago a given neuron spiked
w: a matrix (neurons × neurons) of weights (connectivity matrix)
np, ep are parameters
tottime is the total simulation time
function simulate(tottime, np, sp, ep)
    w = genweights(np, sp)
    V = zeros(np.N, tottime)
    sd = zeros(np.N) .+ 0.2
    V[:, 1:4] .= np.Vᵣ
    V[126:132, 1:4] .= 0.0
    for t in 3:tottime-1
        step!(V, sd, t, w, np, ep, tottime)
    end
    V[:, 1:tottime], sd
end

function step!(V, sd, t, w, np, ep, tottime)
    V[:, t+1] = dv(view(V, :, t), sd, w, np, ep)  # This is the update
    V[view(V, :, t) .> np.Vₜ, t+1] .= 0.0         # This makes the neurons over the threshold spike
    V[view(V, :, t) .== 0.0, t+1] .= np.Vᵣ        # This resets the neurons that spiked to a low potential
    sd .+= ep.dt                                  # Here we adjust the spike delays for the next timestep
    sd[view(V, :, t-2) .== 0.0] .= 3 * ep.dt      # This sets the spike delays to 3 timesteps for the neurons that spiked 2 timesteps ago, emulating transmission delays
end
function dv(v, sd, w, np, ep)
    k = sd .* exp.(-sd ./ np.τₛ)  # This is just to save computations
    v .+ ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .- (v .- np.Eᵢ) .* (w[1] * k)) .* ep.dt  # This is the differential equation, evaluated with Euler
end
The function step! runs 10000 times, simulating 10 seconds of activity, and it takes ~350 ms.
I'd like to implement this in DiffEq, but I don't understand how to write the function in the way DiffEq wants while still computing the spikes.
Specifically I'm confused about what solve will do to my matrix: should I multiply by dt myself, or will solve do it? I came up with this; I don't know if it makes any sense. I think the main problem is in updating sd:
# This function initializes all the needed arrays
function DEsimulate(tottime, np, sp, ep)
    alg = FunctionMap{true}()
    w = genweights(np, sp)
    v = zeros(np.N) .+ np.Vᵣ
    sd = zeros(np.N) .+ 0.2
    v[1:5] .= 0.0
    p = (w, np, ep)
    prob = DiscreteProblem(update!, [v, sd], (ep.dt, tottime * ep.dt), p)
    sim = solve(prob, alg, dt = ep.dt)
    return sim
end

function update!(dv, u, p, t)
    w, np, ep = p
    v, sd = u
    v[v .== 0.0] .= np.Vᵣ
    v[v .> np.Vₜ] .= 0.0
    k = sd .* exp.(-sd ./ np.τₛ)
    dv[1] += ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .- (v .- np.Eᵢ) .* (w[1] * k))
    dv[2][v .== 0.0] .= 2 * ep.dt
    dv[2] .+= 1
end
Any kind of hint, on style or anything else, is very much appreciated!
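On the multiply-by-dt question: if I read the DiffEq docs right, a plain FunctionMap treats the supplied function as the full update map u_{n+1} = f(u_n, p, t_n), while the scale-by-time variant (FunctionMap{true}() as used above) computes u_{n+1} = u_n + dt*f(u_n, p, t_n). Either way, dt is applied exactly once, by you or by the solver, never both. A plain-Julia sketch of the distinction, using a hypothetical scalar decay model rather than the network above:

```julia
# Plain-Julia sketch of the two update conventions (hypothetical scalar
# decay model dv/dt = -a*v, not the network from the question).
function compare_conventions(a, dt, nsteps)
    # Convention 1: the function returns the NEXT STATE; a map-style solver
    # applies it as-is each step, so the Euler increment is folded in by us.
    step_map(v) = v + (-a * v) * dt
    # Convention 2: the function returns the DERIVATIVE; the driver (or a
    # scale-by-time map) multiplies by dt and adds.
    rhs(v) = -a * v
    v1 = 1.0
    v2 = 1.0
    for _ in 1:nsteps
        v1 = step_map(v1)          # dt already inside the map
        v2 = v2 + rhs(v2) * dt     # dt applied by the driver
    end
    return v1, v2
end

v1, v2 = compare_conventions(0.5, 0.1, 100)
@assert v1 ≈ v2   # same trajectory: dt was applied exactly once in each case
```

Mixing the two conventions (returning dv .* ep.dt from a function that the solver also scales by dt) would effectively square the timestep.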
…solve step of the very first example… findfirst or similar as the index into an array, but it returned nothing. vGLCc and so forth are algebraic equations written within the model function. Anything stand out as to why this error could've occurred?
…LabelledArrays.jl, hence the du. and p. syntax.
[slack] <chrisrackauckas> Graham Smith [1:19 PM]
ginkobab: your message got clipped going from gitter to slack, so I can't see your implementation starting around update!. But I've thought some about how I would implement a spiking network, so I'll try to answer your questions:
At any given t, what is V, your membrane potentials? It's just the vector as long as np.N. You don't define the np.N × timesteps matrix; that comes out of solve. You don't multiply by dt (assuming I'm right about what you meant by the matrix); solve will take care of that. Eventually you may even want to use a more advanced solver, which would do way more than multiplying by dt.
V[view(V,:, t) .> np.Vₜ, t+1] .= 0.0 # This makes the neurons over the threshold spike
V[view(V,:, t) .== 0.0, t+1] .= np.Vᵣ # This reset the neurons that spiked to a low potential
…V will have a marker for spiking timesteps, and you won't clobber it on the same step (as these lines do now, with 0.0 as the marker)
Graham Smith [2:00 PM]
Oooooo wait you can probably use callbacks to handle spike thresholding
[2:00 PM] https://diffeq.sciml.ai/v2.0/features/callback_functions.html
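A rough sketch of what that callback approach might look like. The threshold and reset values here are hypothetical (not from the original code), and the condition/affect! pair just follows the shape DiscreteCallback expects; the snippet exercises the functions on a mock state rather than running a real solve:

```julia
# Hypothetical spiking parameters (placeholders, not from the original model).
Vthresh = -50.0   # spike threshold
Vreset  = -70.0   # reset potential

# DiscreteCallback-style pair: condition(u, t, integrator) decides whether the
# callback fires after a step; affect!(integrator) mutates the state when it does.
spike_condition(u, t, integrator) = any(u .> Vthresh)

function spike_affect!(integrator)
    u = integrator.u
    spiked = u .> Vthresh
    u[spiked] .= Vreset   # reset the neurons that crossed threshold
    # here you would also record spike times / update the sd delays
    return nothing
end

# With DifferentialEquations.jl loaded, this would plug in as:
#   cb = DiscreteCallback(spike_condition, spike_affect!)
#   solve(prob, alg, dt = ep.dt, callback = cb)

# Exercise the pair on a mock integrator so the logic is checkable here.
mutable struct MockIntegrator
    u::Vector{Float64}
end
mi = MockIntegrator([-60.0, -45.0, -55.0])
@assert spike_condition(mi.u, 0.0, mi)       # neuron 2 is over threshold
spike_affect!(mi)
@assert mi.u == [-60.0, -70.0, -55.0]        # only neuron 2 was reset
```

This keeps the spiking discontinuity out of the right-hand-side function, which is the usual DiffEq way to handle events.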
Chris Rackauckas [3:08 PM]
you probably want to link to recent docs. that's v2.0
[slack] <ginkobab> Thanks for your answer!
Just to clarify, the weird block about updating sets the potential of the subsequent timestep equal to Vr or to 0, so the order is important, otherwise since 0 > threshold, the reset wouldn't work. So effectively it marks a spike that we can then extract!
I'm gonna look into callbacks now, thanks again!
Hi all. Is there a way to deal with this problem in Julia?
function explicit(y, p, t)
    sqrt(1 - y[1]^2)
end

tspan = (0.0, π)
y0 = 0.0
problem = ODEProblem(explicit, y0, tspan)
sol = solve(problem, Rosenbrock23())
In MATLAB, Cleve Moler solves this problem by adding a bound like this:
f = @(t,y) sqrt(1 - min(y,1)^2)
This can also be done in Python with scipy's odeint:
def f(t, y): return np.sqrt(1 - np.minimum(y, 1)**2)
But Julia doesn't allow this.
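For what it's worth, the same bounding trick does work in plain Julia; a minimal sketch using clamp (min, as in the MATLAB version, would also do for the scalar case):

```julia
# Bound the argument of sqrt so floating-point overshoot past y = 1
# (e.g. y = 1 + 1e-12 produced by the solver) can't cause a DomainError.
f(y, p, t) = sqrt(1 - clamp(y, -1.0, 1.0)^2)

@assert f(0.0, nothing, 0.0) == 1.0
@assert f(1.0 + 1e-12, nothing, 0.0) == 0.0   # would throw without the clamp

# With DifferentialEquations.jl loaded, this rhs can be passed as-is:
#   problem = ODEProblem(f, 0.0, (0.0, π))
#   sol = solve(problem, Rosenbrock23())
```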