ChrisRackauckas on nudge_factor: fix function redefinition (compare)
github-actions[bot] on v5.58.0
ChrisRackauckas on nudge_factor: Fix nudge direction (compare)
ChrisRackauckas on mtk
ChrisRackauckas on master: WIP: Generalize modelingtoolkit… add modelingtoolkitize test for… remove show and 2 more (compare)
ChrisRackauckas on master: Update Project.toml (compare)
ChrisRackauckas on dae_reshape
ChrisRackauckas on master: Allow non-vector DAEs Fixes ht… fix up resids Merge pull request #1428 from S… (compare)
ChrisRackauckas on nudge_factor: add more test info (compare)
affect! by updating my integrator.u (which is a DEDataVector tracking some other data as well, so essentially integrator.u.event_trigger_idx = length(integrator.sol.t)) and just taking the unique values in the resulting [u.event_trigger_idx for u in solution].
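For reference, a minimal sketch of the DEDataVector pattern being described; the type and field names here are hypothetical stand-ins, and the one requirement from DiffEqBase is that the actual state lives in a field named `x`:

```julia
using DiffEqBase  # DEDataVector lives here (later split out into DEDataArrays.jl)

# A state vector that carries extra event-tracking data alongside the values in x
mutable struct EventState{T} <: DEDataVector{T}
    x::Vector{T}           # the actual ODE state; the field must be named x
    event_trigger_idx::Int # extra bookkeeping carried along with the state
end

u = EventState([1.0, 2.0], 0)
u.event_trigger_idx = 5  # what the affect! above does with length(integrator.sol.t)
u[1]                     # indexes into x like a normal vector
```

Because indexing delegates to `x`, such a state can be passed anywhere a plain vector is expected, while the extra field rides along through the solve.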
[slack] <JeremyB> Hey! I started using DifferentialEquations recently and I have some small questions: I am trying to solve a simple ODE (a ball in a particular potential), but the potential I need to use is given to me as a .npy file. My workaround so far has been to interpolate the given array and compute the gradient in the derivative function.
```
const tstart = 0.0
const tend = 1e-5
const tstep = 5e-8

trap = npzread("somefile.npy")
x = npzread("somefile2.npy")
z = npzread("somefile3.npy")

# linear interpolation
interpolated_potential = LinearInterpolation((x, z), trap, extrapolation_bc = 0)

function evolve(dz, z, p, t)
    p₁, p₂, q₁, q₂ = z[1], z[2], z[3], z[4]
    dp₁, dp₂ = -Interpolations.gradient(interpolated_potential, q₁, q₂)
    dq₁ = p₁
    dq₂ = p₂
    dz .= [dp₁, dp₂, dq₁, dq₂]
    return nothing
end

probl_func = ODEFunction(evolve, syms = [:vx, :vy, :x, :y])

function condition(u, t, integrator)
    u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1]
end

function affect!(integrator)
    terminate!(integrator)
end

cb = DiscreteCallback(condition, affect!)

u₀ = [randvelocity()..., randpos()...]
probl = ODEProblem(probl_func, u₀, (tstart, tend))
sol = solve(probl, Vern9(), dt = tstep, abstol = 1e-6, reltol = 1e-6, adaptive = false, callback = cb)
```
However I ran into some issues: when the ball leaves the interpolated region, it throws an error. I tried to catch it with the callback, but for some initial conditions it still crashes. I also get a lot of allocations from the call to gradient in the derivative function, it seems...
Is there a better way to deal with this problem?
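A minimal sketch of one way to address both issues, using a synthetic grid and potential as stand-ins for the .npy data: give the interpolant a non-throwing extrapolation (a DiscreteCallback only fires after a step is accepted, so intermediate stage evaluations can still land outside the grid and error), and assign the derivative components elementwise instead of building a temporary array with `dz .= [...]`:

```julia
using Interpolations

# Hypothetical grid and potential replacing the npzread data
x = range(-1.0, 1.0; length = 51)
z = range(-1.0, 1.0; length = 51)
trap = [xi^2 + zi^2 for xi in x, zi in z]

# Line() extrapolation keeps evaluations just outside the grid from throwing
itp = LinearInterpolation((x, z), trap, extrapolation_bc = Line())

function evolve!(du, u, p, t)
    p₁, p₂, q₁, q₂ = u
    g = Interpolations.gradient(itp, q₁, q₂)  # small static vector, no heap array
    du[1] = -g[1]   # elementwise assignment avoids the temporary
    du[2] = -g[2]   # allocated by du .= [dp₁, dp₂, dq₁, dq₂]
    du[3] = p₁
    du[4] = p₂
    return nothing
end

du = zeros(4)
evolve!(du, [0.1, 0.2, 0.5, -0.5], nothing, 0.0)
```

The termination callback can stay as before; the extrapolation just makes the derivative well-defined long enough for the callback to fire.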
[slack] <torkel.loman> That would be cool, I've never actually heard of canard explosions, but if you have the code for it and that model I'd love to check it out. I tried to figure out what they were and had a look at this one: http://www.scholarpedia.org/article/Canards Is that a good summary of the phenomenon (or do you have some other reference you'd recommend)?
Concerning the API: I think the problem is that it is quite heavy, especially for someone who is not very familiar with this kind of stuff. I often have this problem with the BifurcationKit docs (I guess one issue is that this is inherently not that simple, and that it is a powerful package with rather large capabilities). Things sometimes get rather overwhelming, e.g. following https://rveltz.github.io/BifurcationKit.jl/dev/tutorials3/#Brusselator-1d-(automatic)-1 to get some help with periodic orbits, there's a lot of stuff there! Initially I was just looking for something very simple (going from a system function + jacobian and a parameter range to a plot of the bifurcation points and the periodic orbits). But looking through that tutorial, I'm not even really sure what all the plots mean and what their outputs are (in this case I'm more interested in the non-spatial case, which is probably a bit different).
I wouldn't be too worried, the package is awesome, and all of the information is there. It is just that in cases like this, it was easier to not go ahead and compute the periodic orbits than to tackle the documentation (but if you wanted to expand the tutorials, some very simple examples of e.g. bifurcation diagrams for simple models would be useful).
@btime is better for benchmarking, but @time better reflects the real time. It gives similar results even after a second run. solve takes more time with a newly declared problem than with an already-solved problem, even if they are exactly the same? Shouldn't it be compiled and fine after the first run? Is this related to how ModelingToolkit works, or am I missing something more fundamental? https://files.slack.com/files-pri/T68168MUP-F021MUSQVUK/download/screenshot_20210512_010609.png
remake instead?
ModelingToolkit.jl, is there a way for a solver algorithm to know about the ordering of the variables? I have a custom solver step that I need to inject, which needs to know which variable is which. To be more precise, I have an (N, 3) problem with 3 cartesian components, and I want to use a normalized Euler-Heun step such that the norm of the individual vectors is 1. This cannot be implemented with a callback, because I also need to normalize the half-step inside the Euler-Heun algorithm. I have ModelingToolkit.jl make a 3N problem, which works well, but in order to normalize I need to be sure that the ordering does not get switched around (or know the ordering). Is the ordering available inside perform_step!(integrator, cache)?
[slack] <rveltz> The section "Canard Cycles and their Explosion" of http://www.scholarpedia.org/article/Canards is a good start.
but if you wanted to expand the tutorials, some very simple examples of e.g. bifurcation diagrams for simple models, would be useful
I added a section for ODEs in the tutorials. The issue is that ODEs require different parameters from PDEs. So I guess it would be simpler if I had a BifODEProblem and a BifPDEProblem and dispatched on those.
The real question is whether you think this is too overwhelming: https://rveltz.github.io/BifurcationKit.jl/dev/tutorialsODE/#Branch-of-periodic-orbits-with-finite-differences-1
[slack] <rveltz> Code for canard explosion:
https://gist.github.com/rveltz/15f09eab9a325b069a3239be0713e803
You have to zoom a lot to see the explosion
Note that this is a crude computation. For a canard explosion, you need an adaptive algorithm. Plus the period grows very quickly, which is bad for backward Euler. You could use shooting to get better results (I guess).
remake: I could create the problem, but the solve failed. Is there any detailed tutorial on this? With generate_function(odesys), should it work faster?: https://files.slack.com/files-pri/T68168MUP-F021KUML44V/download/screenshot_20210512_084903.png Is f(u0, p, t) needed to create any generic new ODEProblem, or are you talking about some internal functions? Is remake the best solution if I only want to change the parameters and initial state and run it again?
When you create an ODEProblem from MTK, it creates a new function. You want to just do that once, and then remake with new parameters.
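A minimal sketch of that workflow, using a toy exponential-decay right-hand side as a hypothetical stand-in for an MTK-generated problem: build the ODEProblem once, then reuse it via remake for new parameters and initial state.

```julia
using OrdinaryDiffEq

# Toy model du/dt = p[1]*u, standing in for a problem generated from MTK
f(u, p, t) = p[1] .* u
prob = ODEProblem(f, [1.0], (0.0, 1.0), [-0.5])
sol1 = solve(prob, Tsit5())

# remake swaps only the fields you pass; the function object is reused,
# so the second solve hits already-compiled code
prob2 = remake(prob; u0 = [2.0], p = [-1.0])
sol2 = solve(prob2, Tsit5())
```

Since the problem and function types are unchanged, no new compilation is triggered by the second solve.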
For du = A*u, is the condition number of A a "measure" of the stiffness of the system?
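As a rough check (a sketch, not a definitive criterion): the usual quantity for du = A*u is the stiffness ratio |λ|max/|λ|min of the eigenvalues, which coincides with cond(A) for symmetric A but can differ wildly for non-normal A.

```julia
using LinearAlgebra

# Symmetric case: eigenvalues -1000 and -1, stiffness ratio = cond(A)
A = [-1000.0  0.0;
      0.0    -1.0]
λ = eigvals(A)
stiffness_ratio = maximum(abs, λ) / minimum(abs, λ)
κ = cond(A)  # equals the stiffness ratio here

# Non-normal counterexample: both eigenvalues are -1 (ratio 1),
# yet the condition number is huge
B = [-1.0 100.0;
      0.0  -1.0]
μ = eigvals(B)
ratio_B = maximum(abs, μ) / minimum(abs, μ)
κB = cond(B)
```

So the condition number is only a reliable stiffness proxy when A is (close to) normal.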
states(sys) vector.
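A minimal sketch of reading that ordering off an MTK system (the variable names are hypothetical, and in newer ModelingToolkit versions states is called unknowns):

```julia
using ModelingToolkit

@variables t x(t) y(t) z(t)
D = Differential(t)

eqs = [D(x) ~ y,
       D(y) ~ z,
       D(z) ~ -x]

@named sys = ODESystem(eqs, t)

# states(sys) gives the ordering that u follows in the generated problem,
# which is what a custom perform_step! would index against
ord = states(sys)
```

Inside perform_step! one could then normalize slices of u according to this ordering rather than assuming it.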
[slack] <torkel.loman> That link is better, yes, I overlooked it. Yes, the PDE things might have something to do with it; I've never really worked much with PDEs, so I usually get out of my depth very quickly there.
Thanks for the canard code, it's great. I get an AssertionError: The provided index does not refer to a Hopf Point for the last thing (br_po, utrap = continuation(...)). It happens after I switch the third argument from 3 to 2; I will have a look to see if I can figure it out.
Also, what do you mean by there being a tiny parameter in the model? Sounds like useful information!