[slack] <louisponet> I think the system image I was using compiled in a lower version of DataStructures somewhere causing the error and also the weird updating behavior
[slack] <louisponet> sorry for the noise
Stefano Bettani
@ChrisRackauckas Thanks for your answer, the tutorial looks very useful; I will look into that. How can I see Graham's answer? I can't find any Slack channel
[slack] <chrisrackauckas> it's #diffeq-bridged on the slack
Stefano Bettani
Cool, thanks
Oh I'd need an invitation for that
Could you possibly copy paste his answer please?
[slack] <chrisrackauckas> invitation is simple
[slack] <chrisrackauckas> http://slackinvite.julialang.org
[slack] <chrisrackauckas> but the message is

[slack] <chrisrackauckas> Graham Smith [1:19 PM]
ginkobab: your message got clipped going from gitter to slack, so I can't see your implementation starting around update!. But I've thought some about how I would implement a spiking network, so I'll try to answer your questions:

  1. The timestep is given to you through the argument t
  2. I'm not quite sure what matrix you mean, but I think you mean V, your membrane potentials? It's just the vector as long as np.N. You don't define the np.N x timesteps matrix; that comes out of solve.
  3. No need to multiply by dt (assuming I'm right about what you meant by the matrix); solve will take care of that. Eventually you may even want to use a more advanced solver, which would do way more than multiplying by dt
    Graham Smith [1:26 PM]
    None of that was unique to spiking networks, which I'm a little confused about. How is your original implementation tracking spikes?
    V[view(V,:, t) .> np.Vₜ, t+1] .= 0.0   # This makes the neurons over the threshold spike
    V[view(V,:, t) .== 0.0, t+1] .= np.Vᵣ  # This resets the neurons that spiked to a low potential
    This block seems a bit off. Unless I'm missing something, it sets superthreshold potentials to zero, and then immediately to reset potential. What you really want is either to 1) mark in an auxiliary vector that there was a spike and then reset to some reset potential or 2) reverse these two lines and change 0.0 to some unattainable marker for "spike." That way V will have a marker for spiking timesteps, and you won't clobber it on the same step (as these lines do now, with 0.0 as the marker)
    [1:29 PM] Now the problem with all of these approaches is that they aren't differentiable. Which is really what I should have led with, but I was on a single track 😅
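Graham's option 1 (an auxiliary spike vector plus a reset) can be sketched in NumPy; the threshold and reset values here are made up for illustration:

```python
import numpy as np

# Hypothetical constants: threshold and reset potential (made-up values)
V_thresh = 1.0
V_reset = -0.5

V = np.array([0.2, 1.3, 0.7, 1.1])   # membrane potentials at one timestep
spiked = V > V_thresh                # auxiliary vector marking which neurons spiked
V = np.where(spiked, V_reset, V)     # reset spiking neurons without clobbering the spike record

print(spiked)  # [False  True False  True]
print(V)       # [ 0.2 -0.5  0.7 -0.5]
```

Keeping the spike record separate from the potential vector avoids the clobbering problem described above.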

Graham Smith [2:00 PM]
Oooooo wait you can probably use callbacks to handle spike thresholding
[2:00 PM] https://diffeq.sciml.ai/v2.0/features/callback_functions.html

Chris Rackauckas [3:08 PM]
you probably want to link to recent docs. that's v2.0

Stefano Bettani
Oh thanks!

[slack] <ginkobab> Thanks for your answer!

Just to clarify: the weird updating block sets the potential of the subsequent timestep to Vᵣ or to 0, so the order is important; otherwise, since 0 > threshold, the reset wouldn't work. So effectively it marks a spike that we can then extract!

I'm gonna look into callbacks now, thanks again!

[slack] <ginkobab> Really cool, that will work, and it lets me save only when spikes occur so I could save much memory and time
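For readers following along, the spike-threshold-via-events pattern looks roughly like this in SciPy terms: a terminal event plays the role a DiffEq callback would, stopping at the threshold crossing so the state can be reset and integration resumed. The leaky integrate-and-fire parameters below are made up:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical leaky integrate-and-fire parameters (made-up values)
tau, I_in = 1.0, 2.0
V_thresh, V_reset = 1.0, 0.0

def lif(t, V):
    return (I_in - V) / tau

def spike(t, V):
    return V[0] - V_thresh
spike.terminal = True   # stop integration at the threshold crossing
spike.direction = 1     # only count upward crossings

t0, t_end, V0 = 0.0, 5.0, [0.0]
spike_times = []
while t0 < t_end:
    sol = solve_ivp(lif, (t0, t_end), V0, events=spike, max_step=0.01)
    if sol.t_events[0].size == 0:
        break                       # no more spikes before t_end
    t0 = sol.t_events[0][0]
    spike_times.append(t0)
    V0 = [V_reset]                  # reset and resume, like a callback's affect!

print(spike_times)  # roughly multiples of ln(2) ≈ 0.69; 7 spikes before t = 5
```

This also saves state only at spikes, which is where the memory/time savings mentioned above come from.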
[slack] <ashton.bradley> is there a new PR to watch?
[slack] <chrisrackauckas> I don't think so. @shashi what do you think the path is to get this in there? I know SymbolicUtils already has it, so it can't be so far off?
In the Neural SDE setting, if we fit the model to the data, the model would ignore the noise and go through the data exactly; the Brownian motion term goes to 0 (a point made in section 5 of the paper). The paper used a latent model. How does DiffEqFlux solve this problem?
Jonathan Miller
How do I git this channel on my slack?
Git is get, working on a Dvorak keyboard
Christopher Rackauckas
@shrimpceviche look at the second tutorial in DiffEqFlux and you'll see how that's not the case
In fact, that argument doesn't even make sense. In many cases you need nonzero noise to fit the data, otherwise you get trapped
It's argued with examples and evidence, so I suggest you take a look at the examples and evidence which demonstrate that it doesn't occur, and it's clear from looking at the loss values why it can't occur.
Alex Wadell
Is there a mechanism to resume integration similar to odextend (https://www.mathworks.com/help/matlab/ref/odextend.html) in MATLAB? Specifically, I'm trying to switch callbacks after an event has occurred

Hi all. Is there a way to deal with this problem in Julia?

function explicit(y, p, t)
    sqrt(1 - y^2)  # DomainError once roundoff pushes y above 1
end
tspan = (0.0, π)
y0 = 0.0
problem = ODEProblem(explicit, y0, tspan)
sol = solve(problem, Rosenbrock23())

In MATLAB, Cleve Moler solves this problem by adding a bound like this:
f = @(t,y) sqrt(1-min(y,1)^2).

This can also be done in Python with SciPy's odeint:
def f(t,y): return np.sqrt(1 - np.minimum(y, 1)**2)

But Julia doesn't allow this.
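For reference, here is a runnable SciPy version of the same clipping trick (solve_ivp rather than odeint): the exact solution is y(t) = sin(t) until t = π/2, after which the clipped right-hand side is 0 and y stays at 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# The clipped right-hand side: min(y, 1) keeps the sqrt argument non-negative
def f(t, y):
    return np.sqrt(1.0 - np.minimum(y, 1.0) ** 2)

sol = solve_ivp(f, (0.0, np.pi), [0.0], t_eval=[1.0, np.pi],
                rtol=1e-8, atol=1e-8)
print(sol.y[0])  # ≈ [sin(1), 1.0]
```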

Dylan Colli

Hi everyone, first off, thanks for developing an awesome DE library/framework. It's really impressive just how much functionality you all have crammed into this library. I hope I'm in the right place to ask this.

I'm attempting to install the diffeqpy package to utilize the DifferentialEquations.jl library as a sort of "drop-in" solver to our already existing systems biology models written in Python since we need a speedup and easier parallelization options.

I'm on an Ubuntu 18.04 system and I've

  1. Installed Julia through the official binaries
  2. Installed the DifferentialEquations.jl package through Pkg
  3. Installed PyCall.jl and built it with the Python binary I'm using with my pipenv virtual environment.
  4. Installed the diffeqpy package to the virtual environment via pipenv install (backended with pip, just in the virtual environment)
  5. Written benchmarking scripts for a system of Lorenz Attractors as described in https://github.com/SciML/diffeqpy written using:
    a. SciPy via Python 3.6
    b. DifferentialEquations.jl via Julia
    c. diffeqpy via python-jl
    d. diffeqpy via python-jl with only benchmarking the solving of the problem, not the specification via de.ODEProblem()
    e. diffeqpy via python-jl with only benchmarking the solving of the problem as in (d.) and also pre-compiling the problem via numba
  6. Executed the scripts and plotted the results.

These are the results I've obtained:

  • Diffeqpy took 13.09746799999997 seconds to complete
  • Diffeqpy Precompiled took 12.395007999999985 seconds to complete
  • Diffeqpy Numba Compiled took 12.467692999999949 seconds to complete
  • SciPy ODEINT took 0.004549000000000001 seconds to complete
  • DifferentialEquations.jl took 0.050286285 seconds to complete
    (tried to share graphically but couldn't figure out how to do that)

You can see the full code at https://github.com/dcolli23/benchmarking_diffeqpy

I'm almost certain I've done something wrong here since this performance is vastly different and I know DifferentialEquations.jl to be some of the fastest implementations of ODE solvers out there. Do you all have any idea what I could have done wrong in my setup of the tech stack or if I'm just interfacing with the solver incorrectly?

I seriously appreciate all of your help with this! Thank you so much! And I'm happy to provide any extra information as needed!

Srikanth Sivaramakrishnan

Hello, I am trying to fit an ODE with 8 external inputs from measurement data, which are interpolated, and I find BFGS very slow compared to BlackBoxOptim. I also tried ADAM from DiffEqFlux and it is also very slow. Does anyone here have any thoughts on why?

I am using LinearInterpolation() objects to get the inputs at each time t

I have more details here

Srikanth Sivaramakrishnan
Model code snippet pasted below. All functions ending in _itp are LinearInterpolation() objects from Interpolations.jl; the rest are constants enclosed in a wrapper function.
function thermal_model!(du, u, p, t)
  # states
  t_tread, t_carcass, t_air = u
  # scaling parameters
  p_friction, p_forcedconv, p_tread2road, p_deflection,
  p_natconv, p_carcass2air, p_tread2carcass, p_air2ambient = p
  fxtire = fxtire_itp(t)
  fytire = fytire_itp(t)
  fztire = fztire_itp(t)
  vx = vx_itp(t)
  alpha = alpha_itp(t)
  kappa = kappa_itp(t)
  r_loaded = r_loaded_itp(t)
  h_splitter = h_splitter_itp(t)

  # arc length of tread area
  theta_1 = acos(min(r_loaded - h_splitter, r_unloaded) / r_unloaded)
  theta_2 = acos(min(r_loaded, r_unloaded) / r_unloaded)
  area_tread_forced_air = r_unloaded * (theta_1 - theta_2) * tire_width
  area_tread_contact = tire_width * 2 * sqrt(max(r_unloaded^2 - r_loaded^2, 0))

  q_friction = p_friction * vx * (abs(fytire * tan(alpha)) + abs(fxtire * kappa))
  q_tread2ambient_forcedconv = p_forcedconv * h_forcedconv * area_tread_forced_air * (t_tread - t_ambient) * vx^0.805
  q_tread2ambient_natconv = p_natconv * h_natconv * (area_tread - area_tread_contact) * (t_tread - t_ambient)
  q_tread2carcass = p_tread2carcass * h_tread2carcass * area_tread * (t_tread - t_carcass)
  q_carcass2air = p_carcass2air * h_carcass2air * area_tread * (t_carcass - t_air)
  q_carcass2ambient_natconv = p_natconv * h_natconv * area_sidewall * (t_carcass - t_ambient)
  q_tread2road = p_tread2road * h_tread2road * area_tread_contact * (t_tread - t_track)
  q_deflection = p_deflection * h_deflection * vx * abs(fztire)
  q_air2ambient = p_air2ambient * h_natconv * area_rim * (t_air - t_ambient)

  du[1] = der_t_tread = (q_friction - q_tread2carcass - q_tread2road - q_tread2ambient_forcedconv - q_tread2ambient_natconv)/(m_tread * cp_tread)
  du[2] = der_t_carcass = (q_tread2carcass + q_deflection - q_carcass2air - q_carcass2ambient_natconv)/(m_carcass * cp_carcass)
  du[3] = der_t_air = (q_carcass2air - q_air2ambient)/(m_air * cp_air)
end
Christopher Rackauckas
@awadell1 you might want to use the integrator interface directly: https://diffeq.sciml.ai/stable/basics/integrator/
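For anyone reading along in Python, the resume-and-switch pattern looks roughly like this with SciPy's stepper objects (toy right-hand sides, made up for illustration; DiffEq's integrator interface plays the analogous role):

```python
import numpy as np
from scipy.integrate import RK45

# Toy setup: decay y' = -y until y drops below 0.5, then switch to a
# different right-hand side (growth), mimicking swapping callbacks after
# an event and resuming integration from the current state
def decay(t, y):
    return -y

def growth(t, y):
    return 0.5 * y

solver = RK45(decay, 0.0, np.array([1.0]), t_bound=5.0, max_step=0.05)
switched = False
while solver.status == "running":
    solver.step()
    if not switched and solver.y[0] < 0.5:
        # Reinitialize with a new RHS at the current (t, y), like
        # odextend / restarting the integrator after an event
        solver = RK45(growth, solver.t, solver.y.copy(), t_bound=5.0, max_step=0.05)
        switched = True

print(switched, solver.y[0])  # True, y ends around ~4
```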
@Undi123 hey sorry, the bridge to this older channel was down
Your issue was that your input was a vector and your output was a scalar, so your ODE isn't defined on a single space!
function explicit(y,p,t)
would be fine for your problem (just like the other cases)
Christopher Rackauckas
@dcolli23 did you precompile the library? Either run once before, or use PackageCompiler.jl?
Your code doesn't seem to show up, but my guess is that you're mostly measuring JIT time.
The Python side has some known performance issues though. I address it in https://www.stochasticlifestyle.com/why-numba-and-cython-are-not-substitutes-for-julia/ and there's a video for PyData coming out that discusses it a bit more.
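A generic sketch of that measurement fix (not diffeqpy-specific): run the function once untimed before measuring, so one-time costs like JIT compilation don't count against the benchmark.

```python
import time

def benchmark(fn, *args, warmup=1, repeats=5):
    # Run fn a few times untimed so one-time setup (JIT compilation,
    # caching, lazy imports) doesn't count against the measurement
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

print(benchmark(sum, range(1000)))
```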
BTW @awadell1 @Undi123 @dcolli23 @vsskanth I'm curious what links bring you to this channel, since we don't really advertise it much anymore since all of the discussion is on the Slack channel.
[slack] <oxinabox> OK that was a pain, should be back nowish
[slack] <Jonnie> Should be good to go now. I was missing an explicit conversion method from ComponentArray to Array. Having that method defined keeps the pointer reference—which Sundials needs—where the default conversion method doesn’t.

[slack] <torkel.loman> Is there something I can do to help solve https://discourse.julialang.org/t/strange-maxiters-numeric-instability-occured-when-solving-certain-sde/49392/9 ?

Happy to do some digging, but unsure what I should be looking for.

[slack] <chrisrackauckas> We do have an issue with this in Sundials though
[slack] <chrisrackauckas> while it technically works to give Sundials the function pointer to an array-backed object, there are cases like in Callbacks where you'll get the NVector, so it's not the perfect interface on our side
[slack] <Arno Strouwen> with the new MTK version, the gradient with respect to the α's of the following no longer works:
((((((1.0 * α₁ * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) / (α₂ + (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0))))))))) - (((1.0 * α₄) * x₁(t)) / (α₅ + x₁(t)))) - ((x₁(t) * α₆₂) / 0.1048)) + (((α₆₅ * x₃(t)) / ((α₆₆ + x₃(t)) * ((0.1048 * ((x₁(t) + x₇(t)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * 2))) / α₇₁))) / 0.1048)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * α₂₃)) - ((x₁(t) * x₅(t)) * α₂₄)
ERROR: Failed to apply rule ~~(z::_isone) * ~~x => ~x on expression (1.0 * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) * α₁
This is for sensitivity analysis of ODEs, and this is the only equation in my ODE system that explicitly depends on t; gradients of the other equations work fine.
[slack] <chrisrackauckas> Hmm, thanks, good to know
[slack] <chrisrackauckas> @shashi we should probably make sure we get a mergable version of the sensitivity analysis transform.
[slack] <shashi> Could you post the whole stack trace? (There should be another Error message down below)
[slack] <shashi> Also some context to the code would be nice... Are you calling simplify? What is it being called on?
[slack] <Arno Strouwen> I'll clean up my code a bit and private message you.

[slack] <Peter J> My code does a lot of parameter-parallel ODE solving, and I'd like to do it on the GPU, but since a lot of Julia is not supported on the GPU by DiffEqGPU (broadcast, matrix multiply, ...), would this idea work?

Given an ODE described by f, an IC u_0, and a list of parameters [p_1, p_2, ..., p_n]: create a new ODE function big_f(du,u,p,t) and an initial state w_0, where w_0 is a CuArray consisting of u_0 concatenated n times, and big_f applies f separately to each copy of u_0, each with a different p_i.

[slack] <chrisrackauckas> yeah that's how you'd do it by hand
[slack] <Peter J> cool, ok. Is DiffEqGPU transforming code to that behind-the-scenes?
[slack] <chrisrackauckas> essentially
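A CPU sketch of the concatenation idea above, in NumPy (a toy decay system stands in for f; the GPU version would use a CuArray instead of a NumPy array):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ensemble-by-concatenation: one "big" ODE whose state is n copies of u0,
# each copy evolved with its own parameter p_i (here: decay rates)
u0 = np.array([1.0, 2.0])          # single-system initial condition
ps = np.array([0.5, 1.0, 2.0])     # one parameter per copy
n = len(ps)

def f(u, p):                       # single-system RHS: u' = -p * u
    return -p * u

def big_f(t, w):
    u = w.reshape(n, -1)           # row i is copy i's state
    return f(u.T, ps).T.ravel()    # apply f to every copy with its own p_i

w0 = np.tile(u0, n)                # u0 concatenated n times
sol = solve_ivp(big_f, (0.0, 1.0), w0, rtol=1e-8, atol=1e-10)
print(sol.y[:, -1].reshape(n, -1)) # each row ≈ u0 * exp(-p_i)
```

Because every copy goes through one vectorized RHS, the whole ensemble solves in a single `solve` call, which is the property that makes the GPU version worthwhile.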