BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> but then interpolating is hard
BridgingBot
@GitterIRCbot
[slack] <Mattias Fält> Interpolating is harder for that variable than the ones in the history function? I guess you don't have the derivative, and possibly discontinuities. I'm not sure how the standard history function works though, I don't think it's documented?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> how it works is a detail of the package
[slack] <chrisrackauckas> you can ask if you're curious though
BridgingBot
@GitterIRCbot
[slack] <Mattias Fält> Sure, I'll look through the code first and see if I have any questions
BridgingBot
@GitterIRCbot
[slack] <Donald Lee> @chrisrackauckas I switched p to LArray, but still getting the same error.
BridgingBot
@GitterIRCbot
[slack] <Mattias Fält> I implemented the DAE approach, but the results are quite bad, I added a reference image in the PR here
JuliaControl/ControlSystems.jl#372
BridgingBot
@GitterIRCbot

[slack] <Ramiro Vignolo> Hi guys. I have a struct that contains an ODEProblem (among other things) and has a call method that remakes the problem by changing its tspan (this is an ODE problem with a TERMINAL, not INITIAL, condition) and solves it.
```
struct ZCB{P}
    prob::P
end

function (zcb::ZCB)(T::Real)
    prob = zcb.prob
    new_prob = remake(prob, tspan = (T, zero(T)))
    sol = solve(new_prob)

    # now I have to use the solution to compute a value

    ...

end
```
So far, so good. However, I would like to store the solution for a given `T` in a cache, because if the user performs a new call with the same `T` as before, I would like to avoid solving the ODEProblem again. In this context, should I define the struct to be something like:
```
mutable struct ZCB{P,C}
    prob::P
    cache::C
end
```
How can I construct an instance of ZCB? Should I perform a `solve` in its constructor? On the other hand, should I use a dictionary such as `Dict{Real,SolutionObject}` as a `cache`, so I can look up the solution for a specific `T` as its key?

BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> you can use Core.return_type
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> does it make sense to build a cache as I suggested?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> possibly
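For reference, a hedged sketch of one way such a cache could look (hypothetical names, not from the thread): the struct can stay immutable, since the Dict it holds is already mutable, and a warm-up `solve` in the constructor makes the cache's value type concrete without needing `Core.return_type`.
```
using OrdinaryDiffEq

# Hypothetical cached wrapper: memoizes solutions keyed by the terminal time T.
struct ZCBCached{P,C}
    prob::P
    cache::C
end

function ZCBCached(prob, T0::Real)
    sol = solve(remake(prob, tspan = (T0, zero(T0))), Tsit5())
    ZCBCached(prob, Dict(T0 => sol))   # Dict{typeof(T0), typeof(sol)} is concrete
end

function (zcb::ZCBCached)(T::Real)
    # get! only solves when this T has not been seen before
    sol = get!(zcb.cache, T) do
        solve(remake(zcb.prob, tspan = (T, zero(T))), Tsit5())
    end
    return sol   # ...or compute and return the value of interest from sol
end
```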
abstractingagent
@abstractingagent
@ChrisRackauckas I have been playing with a code base posted by the authors of that paper I linked last time (using the Meijer G-functions as "activation functions" to perform a kind of symbolic regression with gradient descent). Their implementation had a custom approximation of the gradient for the Meijer G-function because apparently it was unstable - would AD solve this?
Trying to toy around with it before I migrate any of the code to julia
They used python - bleh
The idea here would be: instead of using a neural ODE -> SINDy to explicate, you could just use the neural ODE with Meijer G activation functions and then do an inverse transform to get back the real functions
abstractingagent
@abstractingagent
It exploits this concept of "exact learning": use a Mellin transform to operate in the space of functionals, do your learning there, and then inverse transform to get back the learned expression
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> > their implementation had a custom approximation of the gradient for the meijer g function because apparently it was unstable - would AD solve this?
Yeah, you'd just make an AD rule with that
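To make that concrete, here is a hedged sketch (the function names are hypothetical stand-ins for the actual Meijer G evaluation and the authors' gradient approximation) of registering a custom gradient as a ChainRulesCore rrule, so AD uses it instead of differentiating through the unstable code:
```
using ChainRulesCore

# Stand-in for the Meijer G "activation"; the real one would call the actual evaluation.
meijerg_activation(z, θ) = θ * tanh(z)

# Stand-in for the custom gradient approximation: returns (∂y/∂z, ∂y/∂θ).
approx_grad(z, θ) = (θ * (1 - tanh(z)^2), tanh(z))

function ChainRulesCore.rrule(::typeof(meijerg_activation), z, θ)
    y = meijerg_activation(z, θ)
    function meijerg_pullback(ȳ)
        ∂z, ∂θ = approx_grad(z, θ)
        return (NoTangent(), ȳ * ∂z, ȳ * ∂θ)
    end
    return y, meijerg_pullback
end
```
Zygote (and other ChainRules-aware ADs) will then call `approx_grad` whenever `meijerg_activation` appears in the computation being differentiated.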
shrimpceviche
@shrimpceviche
Hi folks, thanks for the amazing work.
I am very new to Neural SDEs, and hope to learn a bit more about how the SDE adjoint sensitivity is formulated in the package. Is it based on David Duvenaud's formulation? (He gave a talk at IAS, and at 41:48 he presented the SDE adjoint sensitivity.)
Thank you.
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> Yes, if you choose BacksolveAdjoint that's the method shown there.
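For reference, a minimal hedged sketch of selecting that adjoint (the model, solver, and loss here are made up; the point is the `sensealg` keyword, provided by DiffEqSensitivity at the time, now SciMLSensitivity):
```
using StochasticDiffEq, DiffEqSensitivity, Zygote

f(u, p, t) = p[1] .* u   # drift
g(u, p, t) = p[2] .* u   # diffusion
u0 = [0.5]
p  = [1.0, 0.1]
prob = SDEProblem(f, g, u0, (0.0, 1.0), p)

# Differentiating through solve with sensealg = BacksolveAdjoint() uses the
# adjoint from that talk (the SDE is solved backwards along with the adjoint state).
loss(p) = sum(Array(solve(prob, EM(); p = p, dt = 0.01,
                          sensealg = BacksolveAdjoint())))
Zygote.gradient(loss, p)
```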
shrimpceviche
@shrimpceviche
Thanks
BridgingBot
@GitterIRCbot
[slack] <ashton.bradley> That was a beast alright... and fast! I’ll give it a go this week
shrimpceviche
@shrimpceviche
Let me know how it goes, we can compare notes :) @AshtonSBradley
BridgingBot
@GitterIRCbot
[slack] <shashi> @isaacsas would you say that having an efficient occursin(xs::Vector, expr), which tells you whether each element of xs appears in expr, would help you speed things up? I see that this could help, for example, in ismassaction. What are some other primitives like this that may be helpful?
abstractingagent
@abstractingagent
@ChrisRackauckas thanks!
Stefano Bettani
@ginkobab
Hello, is somebody here?
Stefano Bettani
@ginkobab

Well, I'll just leave this here: I'm trying to implement in DiffEq the update rule for a spiking neural network. For now I've implemented it in plain Julia and it works decently, but I'd like to speed it up a little:

The function simulate just initializes the variables and then runs step! 10000 times.

The function step! takes as input:

  • V: a matrix (neurons × time), where values are the potentials of individual neurons at a given time
  • sd: a vector (neurons), where values represent how long ago a given neuron spiked
  • w: a matrix (neurons × neurons) of weights (connectivity matrix)
  • np, ep: parameters
  • tottime: the total time of the simulation

```
function simulate(tottime, np, sp, ep)
    w = genweights(np, sp)
    V = zeros(np.N, tottime)
    sd = zeros(np.N) .+ 0.2

    V[:, 1:4] .= np.Vᵣ
    V[126:132, 1:4] .= 0.

    for t in 3:tottime-1
        step!(V, sd, t, w, np, ep, tottime)
    end

    V[:, 1:tottime], sd
end


function step!(V, sd, t, w, np, ep, tottime)
    V[:, t+1] = dv(view(V, :, t), sd, w, np, ep)  # This is the update
    V[view(V, :, t) .> np.Vₜ, t+1] .= 0.0   # This makes the neurons over the threshold spike
    V[view(V, :, t) .== 0.0, t+1] .= np.Vᵣ  # This resets the neurons that spiked to a low potential

    sd .+= ep.dt  # Here we adjust the spike delays for the next timestep
    sd[view(V, :, t-2) .== 0.0] .= 3*ep.dt  # This sets the spike delays to 3 timesteps for the neurons that spiked 2 timesteps ago, emulating transmission delays
end

function dv(v, sd, w, np, ep)
    k = sd .* exp.(-sd ./ np.τₛ)  # This is just to save computations
    v .+ ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .- (v .- np.Eᵢ) .* (w[1] * k)) .* ep.dt  # This is the differential equation, evaluated with Euler
end
```

The function step! runs 10000 times, simulating 10 seconds of activity, and it takes ~350 ms.
I'd like to implement this in DiffEq, but I don't understand how to write the function the way DiffEq wants it while still having it compute the spikes.

Specifically I'm confused by:

  • How to keep track of the timestep inside the function
  • How I should define the matrix
  • What will solve do to my matrix: should I multiply by dt, or will solve do it?

I came up with this; I don't know if it makes any sense. I think the main problem is in updating sd:

```
# This function initializes all the needed arrays
function DEsimulate(tottime, np, sp, ep)
    alg = FunctionMap{true}()
    w = genweights(np, sp)
    v = zeros(np.N) .+ np.Vᵣ
    sd = zeros(np.N) .+ 0.2

    v[1:5] .= 0.

    p = (w, np, ep)
    prob = DiscreteProblem(update!, [v, sd], (ep.dt, tottime*ep.dt), p)
    sim = solve(prob, alg, dt=ep.dt)
    return sim
end

function update!(dv, u, p, t)
    w, np, ep = p

    v, sd = u
    v[v .== 0.0] .= np.Vᵣ
    v[v .> np.Vₜ] .= 0.0

    k = sd .* exp.(-sd ./ np.τₛ)
    dv[1] += ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .- (v .- np.Eᵢ) .* (w[1] * k))
    dv[2][v .== 0.0] .= 2*ep.dt
    dv[2] .+= 1
end
```

Any kind of hint, on style or anything else, is very much appreciated!

Stefano Bettani
@ginkobab
P.S. I don't care about error right now, I only want performance
Christopher Rackauckas
@ChrisRackauckas
@ginkobab Graham responded on the Slack but it looks like it didn't transfer over
Maybe you want to do something like this? https://tutorials.sciml.ai/html/advanced/01-beeler_reuter.html
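On the mechanics questions above, a small hedged sketch (a toy decay map, not the spiking model) of the DiscreteProblem/FunctionMap pattern: the in-place function writes the next state, `solve` does not multiply by dt for you, and the current time `t` is passed in, so a column index can be recovered as `round(Int, t/dt) + 1` if needed.
```
using DifferentialEquations

function mymap!(unext, u, p, t)
    dt, τ = p.dt, p.τ
    # With FunctionMap, `unext` is the NEXT state, not a derivative, so the
    # explicit Euler dt scaling is written out by hand here.
    @. unext = u + (-u / τ) * dt
    return nothing
end

p  = (dt = 0.1, τ = 2.0)
u0 = ones(5)
prob = DiscreteProblem(mymap!, u0, (0.0, 10.0), p)
sol  = solve(prob, FunctionMap(), dt = p.dt)
```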
BridgingBot
@GitterIRCbot
[slack] <oxinabox> I have rebooted it.
It should be back up shortly
Christopher Rackauckas
@ChrisRackauckas
great!
BridgingBot
@GitterIRCbot
[slack] <louisponet> Hi All, anyone else having an error when trying to do any solving of diffeq?
[slack] <louisponet> This happened at the solve step of the very first example
BridgingBot
@GitterIRCbot
[slack] <Donald Lee> Hi, is anyone familiar with what can cause this error?
LoadError: ArgumentError: invalid index: nothing of type Nothing
Full error in reply.
[slack] <louisponet> probably you've tried to use the result of a findfirst or similar as the index into an array, but it returned nothing
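A tiny illustration of that failure mode (generic, not the user's code):
```
xs = [1, 2, 3]
i = findfirst(==(10), xs)  # no match, so findfirst returns nothing
xs[i]                      # ArgumentError: invalid index: nothing of type Nothing
```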
BridgingBot
@GitterIRCbot
[slack] <Donald Lee> The error is coming from running solve. This line in the model function is in red:
du.GLC_c = vGLCc-1/p.rce*vGLCce-1/p.rcg*vGLCcg
[slack] <Donald Lee> vGLCc and so forth are algebraic equations written within the model function. Does anything stand out as to why this error could've occurred?
[slack] <Donald Lee> I'm using LabelledArrays.jl, hence the `du.` and `p.` syntax.
BridgingBot
@GitterIRCbot
[slack] <louisponet> From looking at the code of that LabelledArrays.jl function, there is a findfirst that tries to find the symbol you are calling
[slack] <louisponet> could it be that you are trying to use a symbol that isn't actually in the LabelledArray?
[slack] <louisponet> that's the line I'm referring to
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> @louisponet update your packages and get DataStructures v0.18
BridgingBot
@GitterIRCbot
[slack] <louisponet> Weird, I did that; a normal `up` actually downgrades some packages back, e.g. DataStructures to v0.17. But if I `up` each of the installed packages in the Project separately, they all upgrade to the correct versions
[slack] <louisponet> That didn't work actually
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> what do you mean?
BridgingBot
@GitterIRCbot
[slack] <louisponet> Okay, it was a sysimage issue