BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> You need to use a DAE mass matrix compatible solver
[slack] <chrisrackauckas> note that some may have difficulties though: a discontinuity in the derivative could introduce a zeroth order discontinuity in a differential-algebraic delay differential equation
[slack] <chrisrackauckas> I'm not sure of a good use case for an MArray here.
[slack] <Mattias Fält> Most of them seem to abort if wrapped in MethodOfSteps() and give the wrong answer if they are not
[slack] <Ramiro Vignolo> yes, I feel the same way. Thank you!
[slack] <Mattias Fält> I am not sure if it is because of the solver or a bug. I can open an issue if it is a bug. I tried the following toy example
x'(t) = -x(t) + d(t-1)
d(t) = 0.5*x(t)
and implemented it as a DDEProblem with mass_matrix=[1.0 0;0 0] and function
du[1] = -u[1] + h[1]
du[2] = -u[2] + 0.5*u[1]
Initial state x0=[1.0,0] and history
h!(out, p, t) = (out .= x0)
The first state is correct for the first second (e^-t), but the second state is not x(t)/2 during the first second. I didn't have the time to calculate the analytical result for t>1.
The code is available here
https://gist.github.com/mfalt/d6ac31b192a8ccd96acb0057915dffc2
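For reference, the setup described above would look roughly like the following; this is a minimal sketch assuming DelayDiffEq is loaded, with illustrative names rather than the contents of the linked gist:
```
using DelayDiffEq

x0 = [1.0, 0.0]

# x'(t) = -x(t) + d(t-1); the second row is algebraic: 0 = -d(t) + 0.5*x(t)
function f!(du, u, h, p, t)
    hist = h(p, t - 1.0)          # lagged state [x(t-1), d(t-1)]
    du[1] = -u[1] + hist[2]
    du[2] = -u[2] + 0.5 * u[1]
end

h(p, t) = x0                       # constant history for t <= 0

fun = DDEFunction(f!, mass_matrix = [1.0 0.0; 0.0 0.0])
prob = DDEProblem(fun, x0, h, (0.0, 3.0); constant_lags = [1.0])
sol = solve(prob, MethodOfSteps(ImplicitEuler()))
```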
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> what is it saying when it's aborting?
[slack] <Mattias Fält> The only method I can find that is able to handle the discontinuity in d is MethodOfSteps(ImplicitEuler())
[slack] <chrisrackauckas> yeah that makes sense
[slack] <chrisrackauckas> we're releasing proprietary solvers soon that will handle that better as well
[slack] <chrisrackauckas> the issue is that higher order needs to do some special things
[slack] <Mattias Fält> Either it errors with a MethodError, or aborts with `dt <= dtmin. Aborting.`
[slack] <Mattias Fält> It seems that there should be a much simpler approach to solve DDE problems when the form is as above, without resorting to the DAE approach
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> but what you're specifying is a DAE
[slack] <chrisrackauckas> oh wait, it's a time lagged d only
[slack] <chrisrackauckas> hmm
[slack] <Mattias Fält> Yeah, it is only lagged in a variable that is explicitly defined by the states (and lag)
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> yeah tricky
[slack] <Mattias Fält> If it is possible to save the variable d, for example using a SavingCallback, and then access it in the solver, as is done with the history function, then the problem would be solved
[slack] <chrisrackauckas> yeah you can do them
[slack] <chrisrackauckas> that
[slack] <chrisrackauckas> but then interpolating is hard
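For the saving idea above, DiffEqCallbacks' SavingCallback could record d(t) = 0.5*x(t) at each step; a hedged sketch, leaving aside the interpolation issue Chris raises:
```
using DiffEqCallbacks

saved = SavedValues(Float64, Float64)                         # (time type, saved value type)
cb = SavingCallback((u, t, integrator) -> 0.5 * u[1], saved)  # d(t) = 0.5*x(t)
sol = solve(prob, MethodOfSteps(ImplicitEuler()); callback = cb)
# saved.t and saved.saveval now hold the recorded d(t) samples
```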
BridgingBot
@GitterIRCbot
[slack] <Mattias Fält> Interpolating is harder for that variable than the ones in the history function? I guess you don't have the derivative, and possibly discontinuities. I'm not sure how the standard history function works though, I don't think it's documented?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> how it works is a detail of the package
[slack] <chrisrackauckas> you can ask if you're curious though
BridgingBot
@GitterIRCbot
[slack] <Mattias Fält> Sure, I'll look through the code first and see if I have any questions
BridgingBot
@GitterIRCbot
[slack] <Donald Lee> @chrisrackauckas I switched p to LArray, but still getting the same error.
BridgingBot
@GitterIRCbot
[slack] <Mattias Fält> I implemented the DAE approach, but the results are quite bad; I added a reference image in the PR here
JuliaControl/ControlSystems.jl#372
BridgingBot
@GitterIRCbot

[slack] <Ramiro Vignolo> Hi guys. I have a struct that contains an ODEProblem inside (among other things) and has a call method that remakes the problem by changing its tspan (this is an ODE problem with a TERMINAL, not INITIAL, condition) and solves it.
```
struct ZCB{P}
    prob::P
end

function (zcb::ZCB)(T::Real)
    prob = zcb.prob
    new_prob = remake(prob, tspan = (T, zero(T)))
    sol = solve(new_prob)

    # now I have to use the solution to compute a value

    ...

end
```
So far, so good. However, I would like to store the solution for a given `T` in a cache, because if the user performs a new call using the same `T` as before, I would like to avoid solving the ODEProblem. In this context, should I define the struct to be something like:
```
mutable struct ZCB{P,C}
    prob::P
    cache::C
end
```
How can I construct an instance of ZCB? Should I perform a `solve` in its constructor? On the other hand, should I use a dictionary as a cache, such as `Dict{Real,SolutionObject}`, so I can search the solution for a specific `T` as its key?
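A rough sketch of the Dict-based version (the empty-cache constructor, the `get!` pattern, and the Tsit5 choice are illustrative assumptions, not a recommendation from the thread):
```
mutable struct ZCB{P,C}
    prob::P
    cache::C
end

# Start with an empty cache; the value type could be made concrete, see below.
ZCB(prob) = ZCB(prob, Dict{Float64,Any}())

function (zcb::ZCB)(T::Real)
    # Reuse the stored solution for this T, or solve and cache it.
    get!(zcb.cache, T) do
        new_prob = remake(zcb.prob, tspan = (T, zero(T)))
        solve(new_prob, Tsit5())
    end
end
```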

BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> you can use Core.return_type
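Presumably meaning something along these lines (the compiler internal is Core.Compiler.return_type, and it is inference-dependent, so treat this as a hint rather than an API guarantee):
```
alg = Tsit5()
# Infer a concrete solution type to use as the Dict value type.
SolType = Core.Compiler.return_type(solve, Tuple{typeof(prob), typeof(alg)})
cache = Dict{Float64,SolType}()
```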
BridgingBot
@GitterIRCbot
[slack] <Ramiro Vignolo> does it make sense to build a cache as I suggested?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> possibly
abstractingagent
@abstractingagent
@ChrisRackauckas I have been playing with a code base posted by the authors of that paper I linked last time (using the Meijer G-functions as "activation functions" to perform a kind of symbolic regression with gradient descent). Their implementation had a custom approximation of the gradient for the Meijer G-function because apparently it was unstable - would AD solve this?
Trying to toy around with it before I migrate any of the code to Julia
They used Python - bleh
The idea here would be that instead of using a neural ODE -> SINDy to explicate, you could just use the neural ODE with Meijer G activation functions and then do an inverse transform to get back the real functions
abstractingagent
@abstractingagent
It exploits this concept of "exact learning": use a Mellin transform to operate in the space of functionals, do your learning there, and then inverse transform to get back the expression learned
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> > their implementation had a custom approximation of the gradient for the Meijer G-function because apparently it was unstable - would AD solve this?
Yeah, you'd just make an AD rule with that
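The general shape of such a rule via ChainRulesCore, with `meijerg` and its hand-derived gradient `meijerg_grad` as hypothetical stand-ins for the paper's approximation:
```
using ChainRulesCore

function ChainRulesCore.rrule(::typeof(meijerg), x)
    y = meijerg(x)
    # Use the custom gradient approximation instead of differentiating through meijerg.
    meijerg_pullback(ȳ) = (NoTangent(), ȳ * meijerg_grad(x))
    return y, meijerg_pullback
end
```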
shrimpceviche
@shrimpceviche
Hi folks, thanks for the amazing work.
I am very new to Neural SDEs, and hope to learn a bit more about how the SDE Adjoint Sensitivity is formulated in the package. Is it based on David Duvenaud's formulation? (He gave a talk at IAS, and at 41:48 he presented the SDE Adjoint Sensitivity.)
Thank you.
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> Yes, if you choose BacksolveAdjoint that's the method shown there.
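Usage-wise, that is just a `sensealg` choice when differentiating through the solve; a hedged sketch, assuming an SDEProblem `prob` with parameters `p` and the DiffEqSensitivity-era packages:
```
using StochasticDiffEq, DiffEqSensitivity, Zygote

# Backsolve adjoint (the method from the talk) selected explicitly.
loss(p) = sum(solve(prob, SOSRI(); p = p, saveat = 0.1,
                    sensealg = BacksolveAdjoint()))
Zygote.gradient(loss, p)
```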
shrimpceviche
@shrimpceviche
Thanks
BridgingBot
@GitterIRCbot
[slack] <ashton.bradley> That was a beast alright... and fast! I’ll give it a go this week
shrimpceviche
@shrimpceviche
Let me know how it goes, we can compare notes :) @AshtonSBradley
BridgingBot
@GitterIRCbot
[slack] <shashi> @isaacsas would you say that having an efficient occursin(xs::Vector, expr), which tells you if each element of xs appears in expr, would help you speed up stuff? I see that this could help for example in ismassaction. What are some other primitives like this that may be helpful?
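A rough sketch of such a primitive over a SymbolicUtils-style expression tree (istree/arguments); the name occursin_each and the Boolean-vector return are illustrative, not an existing API:
```
# Walk the expression once, marking which elements of xs appear anywhere in it.
function occursin_each(xs::Vector, expr)
    found = falses(length(xs))
    function walk(e)
        for (i, x) in enumerate(xs)
            found[i] |= isequal(e, x)
        end
        istree(e) && foreach(walk, arguments(e))
        return nothing
    end
    walk(expr)
    return found
end
```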
abstractingagent
@abstractingagent
@ChrisRackauckas thanks!
Stefano Bettani
@ginkobab
Hello, is somebody here?
Stefano Bettani
@ginkobab

Well I'll just leave this here: I'm trying to implement in DiffEq the update rule for a spiking neural network. For now I've implemented it in plain Julia and it works decently, but I'd like to speed it up a little:

The function simulate just initializes the variables and then runs step! 10000 times.

The function step! takes as input:
V: a matrix (neurons × time), where values are the potentials of individual neurons at a given time
sd: a vector (neurons), where values represent how long ago a given neuron spiked
w: a matrix (neurons × neurons) of weights (the connectivity matrix)
np, ep are parameters
tottime is the total time of the simulation

```
function simulate(tottime, np, sp, ep)
    w = genweights(np, sp)
    V = zeros(np.N, tottime)
    sd = zeros(np.N) .+ 0.2

    V[:, 1:4] .= np.Vᵣ
    V[126:132, 1:4] .= 0.

    for t in 3:tottime-1
        step!(V, sd, t, w, np, ep, tottime)
    end

    V[:, 1:tottime], sd
end

function step!(V, sd, t, w, np, ep, tottime)
    V[:, t+1] = dv(view(V, :, t), sd, w, np, ep)  # This is the update
    V[view(V, :, t) .> np.Vₜ, t+1] .= 0.0   # This makes the neurons over the threshold spike
    V[view(V, :, t) .== 0.0, t+1] .= np.Vᵣ  # This resets the neurons that spiked to a low potential

    sd .+= ep.dt  # Here we adjust the spike delays for the next timestep
    sd[view(V, :, t-2) .== 0.0] .= 3*ep.dt  # This sets the spike delays to 3 timesteps for the neurons that spiked 2 timesteps ago, emulating transmission delays
end

function dv(v, sd, w, np, ep)
    k = sd .* exp.(-sd ./ np.τₛ)  # This is just to save computations
    v .+ ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .- (v .- np.Eᵢ) .* (w[1] * k)) .* ep.dt  # This is the differential equation, stepped with explicit Euler
end
```

The function step! runs 10000 times, simulating 10 seconds of activity, and it takes ~350 ms.
I'd like to implement this in DiffEq, but I don't understand how to write the function in the way DiffEq wants it and have it compute the spikes.

Specifically I'm confused by:

  • How to keep track of the timestep inside the function
  • How I should define the matrix
  • What will solve do to my matrix: should I multiply by dt, or will solve do it?

I came up with this, but I don't know if it makes any sense. I think the main problem is in updating sd:

```
# This function initializes all the needed arrays
function DEsimulate(tottime, np, sp, ep)
    alg = FunctionMap{true}()
    w = genweights(np, sp)
    v = zeros(np.N) .+ np.Vᵣ
    sd = zeros(np.N) .+ 0.2

    v[1:5] .= 0.

    p = (w, np, ep)
    prob = DiscreteProblem(update!, [v, sd], (ep.dt, tottime*ep.dt), p)
    sim = solve(prob, alg, dt=ep.dt)
    return sim
end

function update!(dv, u, p, t)
    w, np, ep = p

    v, sd = u
    v[v .== 0.0] .= np.Vᵣ
    v[v .> np.Vₜ] .= 0.0

    k = sd .* exp.(-sd ./ np.τₛ)
    dv[1] += ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .- (v .- np.Eᵢ) .* (w[1] * k))
    dv[2][v .== 0.0] .= 2*ep.dt
    dv[2] .+= 1
end
```

Any kind of hint, also on style or anything else, is very much appreciated!
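For what it's worth, a hedged sketch of how the same update could be phrased as a DiscreteProblem with a single composite state; the ArrayPartition state, the flattened spike-delay bookkeeping, and treating Eₗₜ as an in-scope constant are assumptions, not a tested port of the code above:
```
using OrdinaryDiffEq, RecursiveArrayTools

# Plain FunctionMap treats f as u_{n+1} = f(u_n, p, t_n), so the Euler step
# (the multiplication by dt) is written out explicitly here;
# FunctionMap(scale_by_time = true) would instead compute u + dt*f(u).
function update!(unext, u, p, t)
    w, np, ep = p
    v, sd = u.x                    # ArrayPartition components
    vnext, sdnext = unext.x

    k = sd .* exp.(-sd ./ np.τₛ)
    vnext .= v .+ ((Eₗₜ .- v ./ np.τₘ) .- (v .- np.Eₑ) .* (w[2] * k) .-
                   (v .- np.Eᵢ) .* (w[1] * k)) .* ep.dt
    vnext[v .> np.Vₜ] .= 0.0       # neurons over the threshold spike
    vnext[v .== 0.0] .= np.Vᵣ      # reset the neurons that just spiked
    sdnext .= sd .+ ep.dt           # age the spike delays
    sdnext[v .== 0.0] .= 3 * ep.dt  # refresh delays for neurons that spiked
    return nothing
end

u0 = ArrayPartition(zeros(np.N) .+ np.Vᵣ, zeros(np.N) .+ 0.2)
prob = DiscreteProblem(update!, u0, (0.0, tottime * ep.dt), (w, np, ep))
sol = solve(prob, FunctionMap(), dt = ep.dt)
```
Note this only carries the current state, so the time-indexed V matrix falls out of sol rather than living inside u, and the two-step delay logic from the original step! is simplified to fire off the same-step spike mask.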

Stefano Bettani
@ginkobab
P.S. I don't care about the error right now, I only want performance
Christopher Rackauckas
@ChrisRackauckas
@ginkobab Graham responded on the Slack but it looks like it didn't transfer over