Christopher Rackauckas
@shrimpceviche look at the second tutorial in DiffEqFlux and you'll see how that's not the case
In fact, that argument doesn't even make sense. In many cases you need nonzero noise to fit the data; otherwise you get trapped
6 replies
That's argued with examples and evidence, so I suggest you take a look at the examples and evidence which demonstrate that it doesn't occur; it's also clear from looking at the loss values why it can't occur.
Alex Wadell
Is there a mechanism to resume integration similar to odextend (https://www.mathworks.com/help/matlab/ref/odextend.html) in MATLAB? Specifically, I'm trying to switch callbacks after an event has occurred

Hi all. Is there a way to deal with this problem in Julia?

using DifferentialEquations

function explicit(y, p, t)
    return sqrt(1 - y^2)
end

tspan = (0.0, π)
y0 = 0.0
problem = ODEProblem(explicit, y0, tspan)
sol = solve(problem, Rosenbrock23())

In MATLAB, Cleve Moler solves this problem by adding a bound, like this:
f = @(t,y) sqrt(1 - min(y,1)^2);

The same bound works in Python with scipy.integrate.odeint:
def f(y, t): return np.sqrt(1 - np.minimum(y, 1)**2)

But Julia doesn't allow this.
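For reference, the bounding trick can be checked without any solver library: below is a plain fixed-step RK4 integrator in Python (a standalone sketch, stdlib only). With dy/dt = sqrt(1 - min(y,1)^2) and y(0) = 0, the exact solution is sin(t) up to t = π/2, after which the clamp holds y at 1:

```python
import math

def f(t, y):
    # bounded right-hand side: clamping y at 1 keeps the sqrt argument non-negative
    return math.sqrt(1 - min(y, 1.0) ** 2)

def rk4(f, y0, t0, t1, n):
    # classic fixed-step fourth-order Runge-Kutta
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

y_half = rk4(f, 0.0, 0.0, 0.5, 1000)  # exact answer is sin(0.5)
```

Without the clamp, floating-point overshoot past y = 1 makes the sqrt argument slightly negative, which is exactly the error being asked about.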

Dylan Colli

Hi everyone, first off, thanks for developing an awesome DE library/framework. It's really impressive just how much functionality you all have crammed into this library. I hope I'm in the right place to ask this.

I'm attempting to install the diffeqpy package to utilize the DifferentialEquations.jl library as a sort of "drop-in" solver to our already existing systems biology models written in Python since we need a speedup and easier parallelization options.

I'm on an Ubuntu 18.04 system and I've

  1. Installed Julia through the official binaries
  2. Installed the DifferentialEquations.jl package through Pkg
  3. Installed PyCall.jl and built it with the Python binary I'm using in my pipenv virtual environment.
  4. Installed the diffeqpy package to the virtual environment via pipenv install (backended with pip, just in the virtual environment)
  5. Written benchmarking scripts for a system of Lorenz attractors, as described in https://github.com/SciML/diffeqpy, using:
    a. SciPy via Python 3.6
    b. DifferentialEquations.jl via Julia
    c. diffeqpy via python-jl
    d. diffeqpy via python-jl with only benchmarking the solving of the problem, not the specification via de.ODEProblem()
    e. diffeqpy via python-jl with only benchmarking the solving of the problem as in (d.) and also pre-compiling the problem via numba
  6. Executed the scripts and plotted the results.

These are the results I've obtained:

  • Diffeqpy took 13.097 seconds to complete
  • Diffeqpy Precompiled took 12.395 seconds to complete
  • Diffeqpy Numba Compiled took 12.468 seconds to complete
  • SciPy ODEINT took 0.00455 seconds to complete
  • DifferentialEquations.jl took 0.0503 seconds to complete
    (tried to share graphically but couldn't figure out how to do that)

You can see the full code at https://github.com/dcolli23/benchmarking_diffeqpy

I'm almost certain I've done something wrong here, since the performance is vastly different, and I know DifferentialEquations.jl to be among the fastest ODE solver implementations out there. Do you all have any idea what I could have done wrong in my setup of the tech stack, or if I'm just interfacing with the solver incorrectly?

I seriously appreciate all of your help! Thank you so much! And I'm happy to provide any extra information as needed!

Srikanth Sivaramakrishnan

Hello, I am trying to fit an ODE with 8 external inputs from measurement data, which are interpolated, and I find BFGS very slow compared to BlackBoxOptim. I also tried ADAM from DiffEqFlux and it is also very slow. Does anyone here have any thoughts on why?

I am using LinearInterpolation() objects to get the inputs at each time t

I have more details here

Srikanth Sivaramakrishnan
Model code snippet pasted below. All functions ending in _itp are LinearInterpolation() objects from Interpolations.jl; the rest are constants enclosed in a wrapper function.
function thermal_model!(du, u, p, t)
  # states: tread, carcass, and internal air temperatures
  t_tread, t_carcass, t_air = u

  # scaling parameters
  p_friction, p_forcedconv, p_tread2road, p_deflection,
  p_natconv, p_carcass2air, p_tread2carcass, p_air2ambient = p

  # external inputs, interpolated at time t
  fxtire = fxtire_itp(t)
  fytire = fytire_itp(t)
  fztire = fztire_itp(t)
  vx = vx_itp(t)
  alpha = alpha_itp(t)
  kappa = kappa_itp(t)
  r_loaded = r_loaded_itp(t)
  h_splitter = h_splitter_itp(t)

  # arc length of tread area
  theta_1 = acos(min(r_loaded - h_splitter, r_unloaded) / r_unloaded)
  theta_2 = acos(min(r_loaded, r_unloaded) / r_unloaded)
  area_tread_forced_air = r_unloaded * (theta_1 - theta_2) * tire_width
  area_tread_contact = tire_width * 2 * sqrt(max(r_unloaded^2 - r_loaded^2, 0))

  # heat flows
  q_friction = p_friction * vx * (abs(fytire * tan(alpha)) + abs(fxtire * kappa))
  q_tread2ambient_forcedconv = p_forcedconv * h_forcedconv * area_tread_forced_air * (t_tread - t_ambient) * vx^0.805
  q_tread2ambient_natconv = p_natconv * h_natconv * (area_tread - area_tread_contact) * (t_tread - t_ambient)
  q_tread2carcass = p_tread2carcass * h_tread2carcass * area_tread * (t_tread - t_carcass)
  q_carcass2air = p_carcass2air * h_carcass2air * area_tread * (t_carcass - t_air)
  q_carcass2ambient_natconv = p_natconv * h_natconv * area_sidewall * (t_carcass - t_ambient)
  q_tread2road = p_tread2road * h_tread2road * area_tread_contact * (t_tread - t_track)
  q_deflection = p_deflection * h_deflection * vx * abs(fztire)
  q_air2ambient = p_air2ambient * h_natconv * area_rim * (t_air - t_ambient)

  # temperature derivatives
  du[1] = (q_friction - q_tread2carcass - q_tread2road - q_tread2ambient_forcedconv - q_tread2ambient_natconv) / (m_tread * cp_tread)
  du[2] = (q_tread2carcass + q_deflection - q_carcass2air - q_carcass2ambient_natconv) / (m_carcass * cp_carcass)
  du[3] = (q_carcass2air - q_air2ambient) / (m_air * cp_air)
  return nothing
end
Christopher Rackauckas
@awadell1 you might want to use the integrator interface directly: https://diffeq.sciml.ai/stable/basics/integrator/
@Undi123 hey sorry, the bridge to this older channel was down
Your issue was that your input was a vector and your output was a scalar, so your ODE isn't defined on a single space!
function explicit(y,p,t)
would be fine for your problem (just like the other cases)
Christopher Rackauckas
@dcolli23 did you precompile the library? Either run once before, or use PackageCompiler.jl?
Your code doesn't seem to show up, but my guess is that you're mostly measuring JIT time.
The Python side has some known performance issues though. I address it in https://www.stochasticlifestyle.com/why-numba-and-cython-are-not-substitutes-for-julia/ and there's a video for PyData coming out that discusses it a bit more.
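The JIT-warm-up point can be made concrete with a generic timing pattern (a hypothetical helper, not part of diffeqpy): call the function once before timing so one-time compilation cost is excluded, then report the best of several runs:

```python
import time

def time_after_warmup(f, *args, repeats=5):
    """Time f(*args), excluding one-time setup cost from the measurement."""
    f(*args)  # warm-up call absorbs JIT compilation / caching overhead
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        f(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# usage sketch (assumed names): time_after_warmup(de.solve, prob), where
# `de` and `prob` come from the diffeqpy README at https://github.com/SciML/diffeqpy
```

Timing the very first call instead, as a naive benchmark does, measures compilation plus solving and can dominate the result by orders of magnitude.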
BTW @awadell1 @Undi123 @dcolli23 @vsskanth I'm curious what links bring you to this channel, since we don't really advertise it much anymore since all of the discussion is on the Slack channel.
[slack] <oxinabox> OK that was a pain, should be back nowish
[slack] <Jonnie> Should be good to go now. I was missing an explicit conversion method from ComponentArray to Array. Having that method defined keeps the pointer reference (which Sundials needs), whereas the default conversion method doesn't.

[slack] <torkel.loman> Is there something I can do to help solve https://discourse.julialang.org/t/strange-maxiters-numeric-instability-occured-when-solving-certain-sde/49392/9 ?

Happy to do some digging, but unsure what I should be looking for.

[slack] <chrisrackauckas> We do have an issue with this in Sundials though
[slack] <chrisrackauckas> while it technically works to give Sundials the function pointer to an array-backed object, there are cases like in Callbacks where you'll get the NVector, so it's not the perfect interface on our side
[slack] <Arno Strouwen> With the new MTK version, the gradient with respect to the α parameters of the following no longer works:
((((((1.0 * α₁ * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) / (α₂ + (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0))))))))) - (((1.0 * α₄) * x₁(t)) / (α₅ + x₁(t)))) - ((x₁(t) * α₆₂) / 0.1048)) + (((α₆₅ * x₃(t)) / ((α₆₆ + x₃(t)) * ((0.1048 * ((x₁(t) + x₇(t)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * 2))) / α₇₁))) / 0.1048)) + ((((x₉(t) + x₁₀(t)) + x₁₁(t)) + x₁₂(t)) * α₂₃)) - ((x₁(t) * x₅(t)) * α₂₄)
ERROR: Failed to apply rule ~~(z::_isone) * ~~x => ~x on expression (1.0 * (0.0 + (10000.0 * (1.0 - (1.0 / (1.0 + exp(-20.0 * (t - 24.0)))))))) * α₁
This is for sensitivity analysis of ODEs, and this is the only equation in my ODE system that explicitly depends on t; gradients of the other equations work fine.
[slack] <chrisrackauckas> Hmm, thanks, good to know
[slack] <chrisrackauckas> @shashi we should probably make sure we get a mergable version of the sensitivity analysis transform.
[slack] <shashi> Could you post the whole stack trace? (There should be another Error message down below)
[slack] <shashi> Also some context to the code would be nice... Are you calling simplify? What is it being called on?
[slack] <Arno Strouwen> I'll clean up my code a bit and private message you.

[slack] <Peter J> My code does a lot of parameter-parallel ODE solving, and I'd like to do it on the GPU, but since a lot of Julia is not supported on the GPU by DiffEqGPU (broadcast, matrix multiply...), would this idea work?

Given an ODE described by f, an IC u_0, and a list of parameters [p_1, p_2, ... p_n]: create a new ODE function big_f(du, u, p, t) and an initial state w_0, where w_0 is a CuArray consisting of u_0 concatenated n times, and big_f applies f separately to each copy of u_0, each with a different p_i.

[slack] <chrisrackauckas> yeah that's how you'd do it by hand
[slack] <Peter J> cool, ok. Is DiffEqGPU transforming code to that behind-the-scenes?
[slack] <chrisrackauckas> essentially
[slack] <chrisrackauckas> if it's a stiff problem or has callbacks it's handling a few other things too
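The batching idea from this exchange can be sketched on the CPU with plain Python lists standing in for the CuArray (names like big_f and w0 are illustrative, not DiffEqGPU's API):

```python
def f(du, u, p, t):
    # toy per-trajectory ODE: du/dt = -p * u (exponential decay)
    for i in range(len(u)):
        du[i] = -p * u[i]

def make_big_f(f, dim, params):
    # big_f applies f to each dim-sized slice of w, each with its own p_i
    def big_f(dw, w, _p, t):
        for i, p_i in enumerate(params):
            lo = i * dim
            slice_du = [0.0] * dim
            f(slice_du, w[lo:lo + dim], p_i, t)
            dw[lo:lo + dim] = slice_du
    return big_f

u0 = [1.0, 2.0]
params = [0.5, 1.0, 2.0]
w0 = u0 * len(params)            # u0 concatenated n times
big_f = make_big_f(f, len(u0), params)
dw = [0.0] * len(w0)
big_f(dw, w0, None, 0.0)         # dw holds the n stacked derivatives
```

On the GPU the per-slice loop would be a kernel over trajectories, which is the transformation the reply says DiffEqGPU performs behind the scenes.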
[slack] <Marius Millea> Is it intentional that using DifferentialEquations changes IJulia printing of floats?: https://files.slack.com/files-pri/T68168MUP-F01EZM1UKPA/download/image.png
[slack] <yingbo_ma> Woah, that’s new
[slack] <Marius Millea> This is DifferentialEquations v6.15.0 / IJulia v1.21.3 / Julia 1.5.0
[slack] <Marius Millea> Thanks @Jonnie, your fix worked for me (I'm not using callbacks)
[slack] <chrisrackauckas> 🤷 we don't overload float printing
[slack] <yingbo_ma> I will try it out.
[slack] <ashton.bradley> font change?
[slack] <Andreas Schlaginhaufen> For the same model as above (MyModel), which includes gradients in the forward pass, calculating gradients with respect to the input and the parameters works. However, using it in a neural ODE, I get an error in the adjoint method. I tried ReverseDiffAdjoint(), BacksolveAdjoint(), and InterpolatingAdjoint(). The error is "MethodError: no method matching flatten(::Array{Float64,2})". I am a bit lost, as this only occurs in the adjoint calculation, and also because flatten(::Array{Float64,2}) works in my Julia session.: https://files.slack.com/files-pri/T68168MUP-F01E53ZFGUW/download/untitled
[slack] <Arno Strouwen> Did you find my PM?
[slack] <Alex Wadell> Is there a way to turn on/off callbacks via other callback?
[slack] <chrisrackauckas> make a parameter for that and then for a discrete callback do stillon && ...
[slack] <chrisrackauckas> for a continuous callback do ifelse(stillon,1,...)
[slack] <chrisrackauckas> that way stillon == false causes the callback to not fire no matter what.
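The gating pattern can be illustrated language-agnostically with a toy stepper (plain Python, hypothetical names): a condition short-circuited on a flag stored in the parameters stops firing once the flag is cleared, here by the callback itself:

```python
def step_with_callback(u, p, dt, n_steps, condition, affect):
    """Toy explicit-Euler loop for du/dt = -u with a gated discrete callback."""
    t = 0.0
    for _ in range(n_steps):
        u += dt * (-u)
        t += dt
        if p["stillon"] and condition(u, t):  # flag gates the callback
            affect(p)
    return u, p

p = {"stillon": True, "fires": 0}

def condition(u, t):
    return u < 0.5

def affect(p):
    p["fires"] += 1
    p["stillon"] = False  # one callback turns itself (or another) off

u, p = step_with_callback(1.0, p, 0.01, 200, condition, affect)
```

Here the callback fires exactly once: after the flag flips, the condition is never even evaluated, which is the "causes the callback to not fire no matter what" behavior described above.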
[slack] <shashi> Yeah, I have a PR open which fixes this
[slack] <shashi> SciML/ModelingToolkit.jl#648 I’ll fix the tests in a bit
[slack] <matbesancon> not sure if this is known already but the latest DiffEq solve seems to break with DataStructures 0.18. A method is not implemented for iterate.
The error can be reproduced with the first example of the tutorial

[slack] <Andreas Schlaginhaufen> Is it possible that adjoint methods stop working when the ODE function includes a Zygote.gradient() call? For me this always leads to strange errors like "MethodError: no method matching mul!(::Array{Float64,1}, ::Array{Float64,2}, ::LinearAlgebra.Adjoint{Float64,Array{Float64,1}}, ::Bool, ::Bool)".

Here is a simple example (gradients of ODE function work, but adjoint method is failing):: https://files.slack.com/files-pri/T68168MUP-F01EEJNVDR9/download/mwe
