BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> possibly
abstractingagent
@abstractingagent
@ChrisRackauckas I have been playing with a code base posted by the authors of that paper I linked last time (using Meijer G functions as "activation functions" to perform a kind of symbolic regression with gradient descent). Their implementation had a custom approximation of the gradient for the Meijer G function because apparently it was unstable. Would AD solve this?
Trying to toy around with it before I migrate any of the code to julia
They used Python, bleh
The idea here would be: instead of using a neural ODE -> SINDy to explicate, you could just use the neural ODE with Meijer G activation functions and then do an inverse transform to get back the real functions
abstractingagent
@abstractingagent
It exploits this concept of "exact learning": use a Mellin transform to operate in the space of functionals, do your learning there, and then inverse transform to get back your learned expression
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> > Their implementation had a custom approximation of the gradient for the Meijer G function because apparently it was unstable. Would AD solve this?
Yeah, you'd just make an AD rule with that
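A minimal sketch of such a rule with ChainRulesCore.jl (meijerg_act and stable_grad are hypothetical placeholders standing in for the real Meijer G evaluation and the authors' gradient approximation):

using ChainRulesCore, Zygote

# Placeholder forward evaluation and custom, numerically stable gradient
meijerg_act(x) = log1p(exp(x))
stable_grad(x) = 1 / (1 + exp(-x))

# Reverse-mode rule: AD systems built on ChainRules (Zygote, etc.) will
# call this instead of differentiating through the forward code
function ChainRulesCore.rrule(::typeof(meijerg_act), x)
    y = meijerg_act(x)
    meijerg_pullback(ȳ) = (NoTangent(), ȳ * stable_grad(x))
    return y, meijerg_pullback
end

Zygote.gradient(meijerg_act, 0.5)  # now uses stable_grad via the rrule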
shrimpceviche
@shrimpceviche
Hi folks, thanks for the amazing work.
I am very new to neural SDEs, and hope to learn a bit more about how the SDE adjoint sensitivity is formulated in the package. Is it based on David Duvenaud's formulation? (He gave a talk at IAS, and at 41:48 he presented the SDE adjoint sensitivity.)
Thank you.
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> Yes, if you choose BacksolveAdjoint that's the method shown there.
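For reference, a minimal sketch of selecting that method on a toy scalar SDE (the sensealg keyword is the actual selection mechanism; the model itself is illustrative):

using StochasticDiffEq, DiffEqSensitivity, Zygote

f(u, p, t) = p[1] .* u   # drift
g(u, p, t) = p[2] .* u   # diagonal diffusion
prob = SDEProblem(f, g, [1.0], (0.0, 1.0), [1.5, 0.1])

# Differentiating through solve with sensealg = BacksolveAdjoint()
# uses the backsolve SDE adjoint shown in the talk
loss(p) = sum(solve(prob, EulerHeun(); p = p, dt = 0.01,
                    sensealg = BacksolveAdjoint(), saveat = 0.1))
Zygote.gradient(loss, [1.5, 0.1])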
shrimpceviche
@shrimpceviche
Thanks
BridgingBot
@GitterIRCbot
[slack] <ashton.bradley> That was a beast alright... and fast! I’ll give it a go this week
shrimpceviche
@shrimpceviche
Let me know how it goes, we can compare notes :) @AshtonSBradley
BridgingBot
@GitterIRCbot
[slack] <shashi> @isaacsas would you say that having an efficient occursin(xs::Vector, expr), which tells you if each element of xs appears in expr, would help you speed up stuff? I see that this could help for example in ismassaction. What are some other primitives like this that may be helpful?
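A minimal sketch of what such a primitive could look like, written here over plain Julia Expr trees (occursin_each is a hypothetical name; the real version would walk SymbolicUtils/ModelingToolkit expressions):

# One traversal of the expression tree, marking which of xs occur
function occursin_each(xs::Vector, expr)
    found = falses(length(xs))
    stack = Any[expr]
    while !isempty(stack)
        e = pop!(stack)
        for (i, x) in enumerate(xs)
            found[i] || (found[i] = isequal(e, x))
        end
        e isa Expr && append!(stack, e.args)
    end
    return found
end

occursin_each([:x, :z], :(x^2 + y))  # => [1, 0]: x appears, z does not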
abstractingagent
@abstractingagent
@ChrisRackauckas thanks!
Stefano Bettani
@ginkobab
Hello, is somebody here?
Stefano Bettani
@ginkobab

Well I'll just leave this here: I'm trying to implement in DiffEq the update rule for a spiking neural network. For now I've implemented it in plain Julia and it works decently, but I'd like to speed it up a little:

The function simulate just initializes the variables and then runs step! 10000 times.

The function step! takes as input:
V: a matrix (neurons × time), where values are the potentials of individual neurons at a certain time
sd: a vector (neurons), where values represent how long ago a given neuron spiked
w: a matrix (neurons × neurons) of weights (the connectivity matrix)
np, ep are parameters
tottime is the total time of the simulation

function simulate(tottime, np, sp, ep)
    w = genweights(np, sp)
    V = zeros(np.N, tottime)
    sd = zeros(np.N) .+ 0.2

    V[:, 1:4] .= np.Vᵣ
    V[126:132, 1:4] .= 0.

    for t in 3:tottime-1
        step!(V, sd, t, w, np, ep, tottime)
    end

    V[:, 1:tottime], sd
end


function step!(V, sd, t, w, np, ep, tottime)
    V[:, t+1] = dv(view(V, :, t), sd, w, np, ep)  # This is the update
    V[view(V, :, t) .> np.Vₜ, t+1] .= 0.0   # This makes the neurons over the threshold spike
    V[view(V, :, t) .== 0.0, t+1] .= np.Vᵣ  # This resets the neurons that spiked to a low potential

    sd .+= ep.dt  # Here we adjust the spike delays for the next timestep
    sd[view(V, :, t-2) .== 0.0] .= 3*ep.dt  # This sets the spike delays to 3 timesteps for the neurons that spiked 2 timesteps ago, emulating transmission delays
end

function dv(v, sd, w, np, ep)
    k = sd .* exp.(-sd ./ np.τₛ)  # This is just to save computations
    v .+ ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .- (v .- np.Eᵢ) .* (w[1] * k)) .* ep.dt  # This is the differential equation, evaluated with Euler
end

The function step! runs 10000 times, simulating 10 seconds of activity, and it takes ~350 ms.
I'd like to implement this in DiffEq, but I don't understand how to write the function the way DiffEq wants it and have it compute the spikes.

Specifically I'm confused by:

  • How to keep track of the timestep inside the function
  • How I should define the matrix
  • What will solve do to my matrix: should I multiply by dt, or will solve do it?

I came up with this; I don't know if it makes any sense. I think the main problem is in updating sd:

# This function initializes all the needed arrays
function DEsimulate(tottime, np, sp, ep)
    alg = FunctionMap{true}()
    w = genweights(np, sp)
    v = zeros(np.N) .+ np.Vᵣ
    sd = zeros(np.N) .+ 0.2

    v[1:5] .= 0.

    p = (w, np, ep)
    prob = DiscreteProblem(update!, [v, sd], (ep.dt, tottime*ep.dt), p)
    sim = solve(prob, alg, dt=ep.dt)
    return sim

end

function update!(dv, u, p, t)
    w, np, ep = p

    v, sd = u
    v[v .== 0.0] .= np.Vᵣ
    v[v .> np.Vₜ] .= 0.0

    k = sd .* exp.(-sd ./ np.τₛ)
    dv[1] += ((Eₗₜ .- v ./ np.τₘ ).- (v.-np.Eₑ) .*( w[2] * k) .- (v.-np.Eᵢ) .*( w[1] * k))
    dv[2][v .== 0.0] .= 2*ep.dt
    dv[2] .+= 1
end

Any kind of hint, on style or anything else, is very much appreciated!

Stefano Bettani
@ginkobab
P.S. I don't care about the error right now, I only want performance
Christopher Rackauckas
@ChrisRackauckas
@ginkobab Graham responded on the Slack but it looks like it didn't transfer over
Maybe you want to do something like this? https://tutorials.sciml.ai/html/advanced/01-beeler_reuter.html
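For the specific questions above: with a DiscreteProblem solved by FunctionMap, the function receives the current time t, writes the next state into du, and dt is not applied for you unless you use FunctionMap(scale_by_time=true). A minimal sketch (the model is a made-up stand-in, not the spiking network):

using OrdinaryDiffEq

# du is the *next* state, not a derivative; here a hand-written
# explicit Euler step for du/dt = -u
function step_map!(du, u, p, t)
    du .= u .+ p.dt .* (-u)
end

u0 = fill(1.0, 4)
p = (dt = 0.1,)
prob = DiscreteProblem(step_map!, u0, (0.0, 1.0), p)
sol = solve(prob, FunctionMap(), dt = p.dt)  # t advances by dt each step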
BridgingBot
@GitterIRCbot
[slack] <oxinabox> I have rebooted it.
It should be back up shortly
Christopher Rackauckas
@ChrisRackauckas
great!
BridgingBot
@GitterIRCbot
[slack] <louisponet> Hi All, anyone else having an error when trying to do any solving of diffeq?
[slack] <louisponet> This happened at the solve step of the very first example
BridgingBot
@GitterIRCbot
[slack] <Donald Lee> Hi, is anyone familiar with what can cause this error?
LoadError: ArgumentError: invalid index: nothing of type Nothing
Full error in reply.
[slack] <louisponet> probably you've tried to use the result of a findfirst or similar as the index into an array, but it returned nothing
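A minimal REPL illustration of that failure mode:

julia> v = [10, 20, 30];

julia> i = findfirst(==(40), v)   # no match, so i === nothing

julia> v[i]
ERROR: ArgumentError: invalid index: nothing of type Nothing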
BridgingBot
@GitterIRCbot
[slack] <Donald Lee> The error is coming from running solve. This line in the model function is in red:
du.GLC_c = vGLCc - 1/p.rce*vGLCce - 1/p.rcg*vGLCcg
[slack] <Donald Lee> vGLCc and so forth are algebraic equations written within the model function. Anything stand out as to why this error could've occurred?
[slack] <Donald Lee> I'm using LabelledArrays.jl, hence the du. and p. syntax.
BridgingBot
@GitterIRCbot
[slack] <louisponet> From looking at the code of that LabelledArrays.jl function, there is a findfirst that tries to find the symbol you are calling
[slack] <louisponet> could it be that you are trying to use a symbol that isn't actually in the LabelledArray?
[slack] <louisponet> that's the line I'm referring to
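In other words, something like the following would trigger it (the field names are made up for illustration, assuming the findfirst-based symbol lookup described above):

using LabelledArrays

du = LVector(GLC_c = 0.0, LAC_c = 0.0)
du.GLC_c = 1.0   # fine: :GLC_c is one of the labels
du.GLC_x = 1.0   # :GLC_x is not a label, so the internal findfirst returns
                 # nothing and indexing with it raises the ArgumentError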
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> @louisponet update your packages and get DataStructures v0.18
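(From the Pkg REPL, entered with ], that would be for example:)

pkg> up
pkg> st DataStructures   # should now show v0.18.x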
BridgingBot
@GitterIRCbot
[slack] <louisponet> Weird, I did that, but a normal up actually downgrades some packages back, e.g. DataStructures to v0.17. But then if I up each of the installed packages in the Project separately, they all upgrade to the correct versions
[slack] <louisponet> That didn't work actually
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> what do you mean?
BridgingBot
@GitterIRCbot
[slack] <louisponet> Okay, it was a sysimage issue
[slack] <louisponet> I think the system image I was using had compiled in a lower version of DataStructures somewhere, causing the error and also the weird updating behavior
[slack] <louisponet> sorry for the noise
Stefano Bettani
@ginkobab
@ChrisRackauckas Thanks for your answer, the tutorial looks very useful, I will look into it. How can I see Graham's answer? I can't find any Slack channel
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> it's #diffeq-bridged on the slack
Stefano Bettani
@ginkobab
Cool, thanks
Oh I'd need an invitation for that
Could you possibly copy-paste his answer please?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> invitation is simple
[slack] <chrisrackauckas> http://slackinvite.julialang.org
[slack] <chrisrackauckas> but the message is