BridgingBot
@GitterIRCbot
[slack] <Lütfullah Tomak> How are the continuous callback conditions stored? I have a callback set with some continuous callbacks, but their condition functions differ in return type. Does this cause type instability or hurt performance?
BridgingBot
@GitterIRCbot
[slack] <atiyo ghosh> Actually ApproxFun seems pretty well-featured.
JeremyB
@jrmbr

Hey! I started using DifferentialEquations.jl recently and I have a few small questions: I am trying to solve a simple ODE (a ball in a particular potential), but the potential I need to use is given to me as a .npy file. My workaround so far has been to interpolate the given array and compute the gradient in the derivative function.

using DifferentialEquations, NPZ, Interpolations

const tstart = 0.0
const tend = 1e-5
const tstep = 5e-8

trap = npzread("somefile.npy")
x = npzread("somefile2.npy")
z = npzread("somefile3.npy")

# linear interpolation of the potential over the (x, z) grid
interpolated_potential = LinearInterpolation((x, z), trap; extrapolation_bc=0)

function evolve(dz,z,p,t)
    p₁, p₂, q₁, q₂ = z[1], z[2], z[3], z[4]

    dp₁,dp₂ = -Interpolations.gradient(interpolated_potential,q₁,q₂)
    dq₁ = p₁
    dq₂ = p₂
    dz .= [dp₁, dp₂, dq₁, dq₂]
    return nothing
end

probl_func = ODEFunction(evolve,syms=[:vx,:vy,:x,:y])

function condition(u,t,integrator)
    u[3]>=x[end] || u[3]<=x[1] || u[4]>=z[end] || u[4]<=z[1]
end

function affect!(integrator)
    terminate!(integrator)
end

cb = DiscreteCallback(condition,affect!)

u₀ = [randvelocity()..., randpos()...]   # randvelocity() and randpos() are user-defined helpers (not shown)
probl = ODEProblem(probl_func, u₀, (tstart, tend))
sol = solve(probl, Vern9(), dt=tstep, abstol=1e-6, reltol=1e-6, adaptive=false, callback=cb)

However, I run into some issues: when the ball leaves the interpolated region, it throws an error. I tried to catch it with the callback, but for some initial conditions it still crashes. Also, it seems I get a lot of allocations from the call to gradient in the derivative function...

Is there a better way to deal with this problem?

BridgingBot
@GitterIRCbot

[slack] <torkel.loman> That would be cool, never actually heard of canard explosions, but if you have the code for it and that model I'd love to check it out. Tried to figure out what they were, and had a look at this one: http://www.scholarpedia.org/article/Canards Is that a good summary of the phenomena (or do you have some other reference you'd recommend)?

Concerning the API: I think the problem is that it is quite heavy, especially for someone who is not very familiar with this kind of stuff. Generally I often have this problem with the BifurcationKit docs (I guess one problem is that this is inherently not that simple, and that it is a powerful package with rather large capabilities). Things sometimes get rather overwhelming, e.g. following https://rveltz.github.io/BifurcationKit.jl/dev/tutorials3/#Brusselator-1d-(automatic)-1 to get some help with periodic orbits, there's a lot of stuff there! Initially I was just looking for something very simple (going from a system function + Jacobian and a parameter range to a plot of the bifurcation points and the periodic orbits). But looking through that tutorial, I'm not even really sure what all the plots mean and what their outputs are (in this case I'm more interested in the non-spatial case, which probably is a bit different).

I wouldn't be too worried, the package is awesome, and all of the information is there. It is just that in cases like this, it was easier to not go ahead and make the periodic orbits rather than try to tackle the documentation (but if you wanted to expand the tutorials, some very simple examples of e.g. bifurcation diagrams for simple models would be useful).

BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> ApproxFun is well-featured and you can just use it with DiffEq without the connection package now.
[slack] <chrisrackauckas> it won't store it, that should be fine.
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> I have a customized Newton's method implementation that I made to resolve the temperature/enthalpy conservation law for freezing curves... and I had to add a backtracking line search to make it work (have to avoid jumping over the discontinuity), so I figured the same principle could/should be applied to an implicit integrator.
[slack] <chrisrackauckas> it could/should
[slack] <chrisrackauckas> I was just talking to @yingbo_ma about SciML/OrdinaryDiffEq.jl#1399 the other day.
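
A minimal sketch of the damped-Newton-with-backtracking idea described above (F, J, and every other name here are hypothetical placeholders, not the freezing-curve code from the chat):

```
using LinearAlgebra

# Hedged sketch: Newton's method with a backtracking (Armijo-style) line search.
# F is the residual and J its Jacobian; both are placeholder callables.
function newton_backtracking(F, J, x0; tol=1e-10, maxiter=50, α=1e-4)
    x = copy(x0)
    for _ in 1:maxiter
        r = F(x)
        norm(r) < tol && return x
        Δ = -(J(x) \ r)                 # full Newton step
        λ = 1.0
        # Backtrack: halve the step until the residual norm decreases sufficiently,
        # which is what keeps the iterate from jumping over a discontinuity.
        while norm(F(x + λ*Δ)) > (1 - α*λ) * norm(r) && λ > 1e-8
            λ /= 2
        end
        x += λ*Δ
    end
    return x
end
```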
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> Is there a way to access the integrator from a ManifoldProjection callback? Could be useful to have access to f if it's a callable struct with relevant fields... but then again I guess g could also be a callable struct.
BridgingBot
@GitterIRCbot
[slack] <wilwxk> Can someone explain to me what is happening here?
I know that @btime is better for benchmarking, but @time better reflects the real time. It gives similar results even after a second run. Is this related to how ModelingToolkit works, or am I missing something more fundamental?: https://files.slack.com/files-pri/T68168MUP-F021MUSQVUK/download/screenshot_20210512_010609.png
BridgingBot
@GitterIRCbot
[slack] <wilwxk> I know that @btime is better for benchmarking, but @time better reflects the real time. It gives similar results even after a second run.
Why does solve take more time with a newly declared problem than with an already solved problem if they are exactly the same? Shouldn't it be compiled and fine after the first run? Is this related to how ModelingToolkit works, or am I missing something more fundamental?
BridgingBot
@GitterIRCbot
[slack] <mason.protter> Given that we have no idea what the code is that you're running, anything anyone can say would be a shot-in-the-dark guess
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> I'm not 100% sure, but I think ModelingToolkit does some stuff under the hood to convert the ODESystem into an ODEFunction. I wouldn't be surprised if it has to create runtime generated functions, which would be invalidated and recompiled on each construction of ODEProblem. Have you tried using remake instead?
BridgingBot
@GitterIRCbot
[slack] <Pieter Gunnink> When using ModelingToolkit.jl, is there a way for a solver algorithm to know about the ordering of the variables? I have a custom solver step that I need to inject, which needs to know which variable is which. To be more precise, I have an (N,3) problem with 3 Cartesian components, and I want to use a normalized Euler-Heun step such that the norm of the individual vectors is 1. This cannot be implemented with a callback, because I also need to normalize the half-step inside the Euler-Heun algorithm. I have ModelingToolkit.jl make a 3N problem, which works well, but in order to normalize I need to be sure that the ordering does not get switched around (or know the ordering). Is the ordering available inside perform_step!(integrator, cache)?
BridgingBot
@GitterIRCbot

[slack] <rveltz> The section Canard Cycles and their Explosion of http://www.scholarpedia.org/article/Canards is a good start.

but if you wanted to expand the tutorials, some very simple examples of e.g. bifurcation diagrams for simple models, would be useful
I added a section for ODE in the tutorials. The issue is that ODE requires parameters different from PDE. So I guess it would be simpler if I had a BifODEProblem and a BifPDEProblem and dispatch on those.

The real question is whether you think this is too overwhelming: https://rveltz.github.io/BifurcationKit.jl/dev/tutorialsODE/#Branch-of-periodic-orbits-with-finite-differences-1

BridgingBot
@GitterIRCbot

[slack] <rveltz> Code for canard explosion:
https://gist.github.com/rveltz/15f09eab9a325b069a3239be0713e803

You have to zoom in a lot to see the explosion.

Note that this is a crude computation. For canard explosions you need an adaptive algorithm. Plus, the period grows very quickly, which is bad for backward Euler. You could use shooting to get better results (I guess).

BridgingBot
@GitterIRCbot
[slack] <rveltz> There must be a tiny parameter in your model
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> MTK is building new functions. You don't want to recompile each time.
[slack] <chrisrackauckas> Use the varmap_to_vars
BridgingBot
@GitterIRCbot
[slack] <wilwxk> <@U8D9768Q6> I'm testing the <https://github.com/SciML/ModelingToolkit.jl|second example> in the readme of ModelingToolkit.
<@U01H36BUDJB> I tested with remake; I could create the problem, but the solve failed. Is there any detailed tutorial for it?
So, is it creating the differential equation system from the model again? If I extract the function with generate_function(odesys), should it work faster?: https://files.slack.com/files-pri/T68168MUP-F021KUML44V/download/screenshot_20210512_084903.png
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> I don't get your question
BridgingBot
@GitterIRCbot
[slack] <wilwxk> I just didn't understand the part about MTK building new functions. Are you talking about the function f(u0, p, t) needed to create any generic new ODEProblem, or about some internal functions?
And to make it faster, is remake the best solution if I only want to change the parameters and initial state and run it again?
[slack] <chrisrackauckas> Yes, when you build a new ODEProblem from MTK it creates a new function. You want to just do that once, and then just remake with new parameters.
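
A hedged sketch of the "build once, remake many times" pattern described above, written against the Lorenz example from the ModelingToolkit README that wilwxk refers to (API as of that time; the parameter values and the 12.0/20.0/2.0 update are made up for illustration):

```
using ModelingToolkit, OrdinaryDiffEq

@parameters t σ ρ β
@variables x(t) y(t) z(t)
D = Differential(t)
eqs = [D(x) ~ σ*(y - x),
       D(y) ~ x*(ρ - z) - y,
       D(z) ~ x*y - β*z]
sys = ODESystem(eqs)

# Build the problem once: this is where MTK generates and compiles the ODE function.
prob = ODEProblem(sys, [x => 1.0, y => 0.0, z => 0.0], (0.0, 10.0),
                  [σ => 10.0, ρ => 28.0, β => 8/3])
sol = solve(prob, Tsit5())

# For new parameters, reuse the compiled problem via remake + varmap_to_vars
# (so the values land in the ordering MTK expects), with no recompilation.
newp = varmap_to_vars([σ => 12.0, ρ => 20.0, β => 2.0], parameters(sys))
sol2 = solve(remake(prob; p = newp), Tsit5())
```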
BridgingBot
@GitterIRCbot
[slack] <wilwxk> Got it, just one more question: when I save f = eval(generate_function(odesys)[2]) and use it with ODEProblem repeatedly, is it creating a new raw f(u0, p, t) and bypassing the MTK build time?
[slack] <chrisrackauckas> that would bypass it
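
And a hedged sketch of that generate_function route, reusing the same hypothetical Lorenz sys as above; as in the chat, index [2] of generate_function's output selects the in-place form:

```
# eval the generated expression once; after this, building problems skips MTK codegen.
f! = eval(generate_function(sys)[2])     # in-place form f!(du, u, p, t)
u0 = varmap_to_vars([x => 1.0, y => 0.0, z => 0.0], states(sys))
p  = varmap_to_vars([σ => 10.0, ρ => 28.0, β => 8/3], parameters(sys))
prob_raw = ODEProblem(f!, u0, (0.0, 10.0), p)
sol = solve(prob_raw, Tsit5())           # works at the REPL; inside a function you may
                                         # need Base.invokelatest because of world age
```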
BridgingBot
@GitterIRCbot
[slack] <Pieter Gunnink> Thanks, I see it now. When is the ordering determined actually? When you run ODEProblem() or before?
[slack] <Andrew Leonard> For a linear system du = A*u, is the condition number of A a "measure" of the stiffness of the system?
[slack] <chrisrackauckas> yes
[slack] <chrisrackauckas> See https://www.youtube.com/watch?v=FENK1SDvPiA for a much more precise discussion of the property.
[slack] <chrisrackauckas> It's determined before, by the ordering of the symbolic states(sys) vector.
[slack] <Andrew Leonard> awesome, thanks!
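
A small sketch of how one could inspect the ordering mentioned above (same hypothetical Lorenz sys as in the earlier sketches):

```
states(sys)       # e.g. [x(t), y(t), z(t)]: u[1] corresponds to x, u[2] to y, u[3] to z
parameters(sys)   # the analogous ordering for p
# Note (assumption): transformations such as structural_simplify can change this list,
# so query it on the exact system the ODEProblem was built from.
```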
BridgingBot
@GitterIRCbot
[slack] <Pieter Gunnink> I see, thanks!
BridgingBot
@GitterIRCbot

[slack] <torkel.loman> That link is better, yes, I overlooked it. Yes, the PDE things might have something to do with it; I've never really worked much with PDEs, so I usually get out of my depth very quickly there.

Thanks for the canard code, it's great. I get an AssertionError: The provided index does not refer to a Hopf Point for the last call (br_po, utrap = continuation(...)). It happens after I switch the third argument from 3 to 2; I will have a look to see if I can figure it out.

Also, what do you mean by there being a tiny parameter in the model? Sounds like useful information!

BridgingBot
@GitterIRCbot
[slack] <contradict> You might try isoutofdomain https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous to prevent escape.
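
A hedged sketch of that suggestion, reusing the names from JeremyB's snippet (x and z are the interpolation grid axes there). isoutofdomain works by rejecting a step and retrying with a smaller dt, so this assumes adaptive stepping rather than the fixed-step call in the original code:

```
sol = solve(probl, Vern9();
            abstol = 1e-6, reltol = 1e-6, callback = cb,
            # reject any step whose end state lies outside the interpolated region
            isoutofdomain = (u, p, t) ->
                u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1])
```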
BridgingBot
@GitterIRCbot
[slack] <contradict> What error do you get when it does escape?
BridgingBot
@GitterIRCbot
[slack] <Dan Padilha> When using EnsembleProblem with EnsembleThreads, it works fine simulating my specific problem up to ~100 trajectories (on 4 threads), but if I push that further to, for example, ~1000 trajectories, it is much, much slower (as in, much more than an order of magnitude slower than I'd expect to see in the worst case). I assume this is an issue of memory management (I think solving my problems allocates a fair bit of memory). Has anyone seen this kind of behaviour, or does anyone know what I could do to resolve it with the least amount of effort? Should I maybe look into multi-processing instead of multi-threading?
BridgingBot
@GitterIRCbot
[slack] <rveltz> the error is because br.bifpoint[2] is not a Hopf point
[slack] <rveltz> > Also, what do you mean by there being a tiny parameter in the model?
That's my feeling from the simulation. However, sometimes this is buried in the ODE and requires specific investigation.
BridgingBot
@GitterIRCbot
[slack] <Simon Welker> I just had this happen as well -- to me it looks like I ran out of physical memory going from 1k to 10k trajectories, so my system was constantly swapping. I got around it by using dense=false, which saved enough memory for it to work for me
[slack] <Simon Welker> not sure if it's the same cause for you -- just something to try
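
A hedged sketch of the memory-saving options mentioned in this thread (prob and prob_func stand in for the actual ensemble setup; save_everystep = false is an extra knob to consider, not something suggested in the chat):

```
eprob = EnsembleProblem(prob; prob_func = prob_func)
sim = solve(eprob, Tsit5(), EnsembleThreads();
            trajectories = 1000,
            dense = false,            # drop the dense interpolant, as suggested above
            save_everystep = false)   # optionally keep only the first/last saved states per trajectory
```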