frankschae on concrete_solve_lss: fix signs (compare)
KirillZubov on loss_each_equation: fix (compare)
KirillZubov on loss_each_equation: update broadcast (compare)
frankschae on concrete_solve_lss: fix passing a dg (compare)
`@btime` is better for benchmarking, but `@time` better reflects the real wall time. It gives similar results even after a second run. `solve` takes more time with a newly declared problem than with an already solved problem, even if they are exactly the same. Shouldn't it be compiled and fine after the first run? Is this related to how ModelingToolkit works, or am I missing something more fundamental?: https://files.slack.com/files-pri/T68168MUP-F021MUSQVUK/download/screenshot_20210512_010609.png
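For reference, a minimal sketch of the difference between the two macros (assuming BenchmarkTools.jl is installed; `f_sum` is a throwaway function just for illustration):

```julia
using BenchmarkTools

f_sum(n) = sum(1:n)

@time f_sum(10^6)    # first call: includes JIT compilation time
@time f_sum(10^6)    # second call: already compiled, closer to the steady-state cost
@btime f_sum(10^6)   # runs many samples and reports the minimum time
```

`@btime` interpolates away noise by sampling repeatedly, which is why it is preferred for benchmarking, while `@time` shows the one-shot cost you actually pay, compilation included.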
`remake` instead?
ModelingToolkit.jl, is there a way for a solver algorithm to know about the ordering of the variables? I have a custom solver step that I need to inject, which needs to know which variable is which. To be more precise, I have an (N,3) problem, with 3 cartesian components, and I want to use a normalized Euler-Heun step, such that the norm of the individual vectors is 1. This cannot be implemented with a Callback, because I also need to normalize the half-step inside the Euler-Heun algorithm. I have ModelingToolkit.jl make a 3N problem, which works well, but in order to normalize I need to be sure that the ordering does not get switched around (or know the ordering). Is the ordering available inside `perform_step!(integrator, cache)`?
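One hedged way to recover the ordering, assuming an MTK version where `states(sys)` (which appears later in this thread) returns the symbolic states in the order of the generated state vector:

```julia
using ModelingToolkit

# Toy 3-variable system standing in for the real 3N problem
@variables t x(t) y(t) z(t)
D = Differential(t)
eqs = [D(x) ~ y, D(y) ~ z, D(z) ~ -x]
@named sys = ODESystem(eqs, t)

# Symbolic states, in the order used by the generated f(du, u, p, t)
ordering = states(sys)
# Map each symbolic state to its index in u
idx = Dict(s => i for (i, s) in enumerate(ordering))
```

With that map in hand, the custom `perform_step!` can pick out the three components of each vector by index rather than assuming an ordering.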
[slack] <rveltz> The section Canard Cycles and their Explosion of http://www.scholarpedia.org/article/Canards is a good start.
but if you wanted to expand the tutorials, some very simple examples of e.g. bifurcation diagrams for simple models would be useful
I added a section for ODEs in the tutorials. The issue is that ODEs require parameters different from PDEs. So I guess it would be simpler if I had a BifODEProblem and a BifPDEProblem and dispatched on those.
The real question is whether you think this is too overwhelming: https://rveltz.github.io/BifurcationKit.jl/dev/tutorialsODE/#Branch-of-periodic-orbits-with-finite-differences-1
[slack] <rveltz> Code for canard explosion:
https://gist.github.com/rveltz/15f09eab9a325b069a3239be0713e803
You have to zoom a lot to see the explosion
Note that this is a crude computation. For canard explosion, you need an adaptive algorithm. Also, the period grows very quickly, which is bad for backward Euler. You could use a shooting method to get better results (I guess).
`remake`, I could create the problem but the solve failed. Is there any detailed tutorial for that?
`generate_function(odesys)`: should it work faster?: https://files.slack.com/files-pri/T68168MUP-F021KUML44V/download/screenshot_20210512_084903.png
Is `f(u0, p, t)` needed to create any generic new ODEProblem, or are you talking about some internal functions? Is `remake` the best solution if I only want to change the parameters and initial state and run it again?
When you create an `ODEProblem` from MTK, it creates a new function. You want to do that just once, and then just `remake` with new parameters.
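A minimal sketch of that pattern (plain OrdinaryDiffEq with a hand-written `f` here, but the same idea applies to an MTK-generated problem):

```julia
using OrdinaryDiffEq

# Build the problem once; function generation/compilation happens here
f(u, p, t) = p[1] * u
prob = ODEProblem(f, 1.0, (0.0, 1.0), [0.5])
sol1 = solve(prob, Tsit5())

# remake swaps u0/p on the existing problem without rebuilding f
prob2 = remake(prob; u0 = 2.0, p = [0.3])
sol2 = solve(prob2, Tsit5())
```

Only the first `ODEProblem` construction pays the function-generation cost; every subsequent `remake` + `solve` reuses the compiled function.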
`ODEProblem()` or before?
For `du = A*u`, is the condition number of `A` a "measure" of the stiffness of the system?
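Not exactly: the usual heuristic for a linear system is the spread of `A`'s eigenvalues (the "stiffness ratio"), not `cond(A)`. For a normal matrix the two happen to coincide (the singular values equal the eigenvalue magnitudes), but for non-normal `A` they can differ. A quick sketch with a hypothetical matrix:

```julia
using LinearAlgebra

# Widely separated decay rates: the classic stiff situation
A = [-1.0  0.0;
      0.0 -1e6]
λ = eigvals(A)

# Stiffness ratio: fastest decay rate over slowest decay rate
stiffness_ratio = maximum(abs.(real.(λ))) / minimum(abs.(real.(λ)))  # 1e6 here: stiff
```

A large stiffness ratio means an explicit solver's stability constraint is set by the fast mode while the integration length is set by the slow one, which is what makes the problem expensive.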
the `states(sys)` vector.
[slack] <torkel.loman> That link is better, yes; I overlooked it. Yes, the PDE things might have something to do with it. I've never really worked much with PDEs, so I usually get out of my depth very quickly there.
Thanks for the Canard code, it's great. I get an `AssertionError: The provided index does not refer to a Hopf Point` for the last thing (`br_po, utrap = continuation(...`). It happens after I switch the third argument from `3` to `2`; I will have a look to see if I can figure it out.
Also, what do you mean by there being a tiny parameter in the model? Sounds like useful information!
[slack] <JeremyB> Hey! I started using DifferentialEquations.jl recently and I have some small questions: I am trying to solve a simple ODE (a ball in a particular potential), but the potential I need to use is given to me as a .npy file. My workaround so far has been to interpolate the given array and compute the gradient in the derivative function.
```julia
using DifferentialEquations, Interpolations, NPZ

const tstart = 0.0
const tend = 1e-5
const tstep = 5e-8

trap = npzread("somefile.npy")
x = npzread("somefile2.npy")
z = npzread("somefile3.npy")
interpolated_potential = LinearInterpolation((x, z), trap, extrapolation_bc = 0)

# State renamed from z to u to avoid shadowing the grid array z above
function evolve(du, u, p, t)
    p₁, p₂, q₁, q₂ = u[1], u[2], u[3], u[4]
    dp₁, dp₂ = -Interpolations.gradient(interpolated_potential, q₁, q₂)
    # Assign in place; `du .= [dp₁, ...]` would allocate a temporary array
    du[1] = dp₁
    du[2] = dp₂
    du[3] = p₁
    du[4] = p₂
    return nothing
end
probl_func = ODEFunction(evolve, syms = [:vx, :vy, :x, :y])

# Terminate when the ball leaves the interpolated region
function condition(u, t, integrator)
    u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1]
end
affect!(integrator) = terminate!(integrator)
cb = DiscreteCallback(condition, affect!)

u₀ = [randvelocity()..., randpos()...]
probl = ODEProblem(probl_func, u₀, (tstart, tend))
sol = solve(probl, Vern9(), dt = tstep, abstol = 1e-6, reltol = 1e-6,
            adaptive = false, callback = cb)
```
However, I run into some issues: when/if the ball leaves the interpolated region, it throws an error. I tried to catch it with the callback, but for some initial conditions it still crashes. Also, it seems I get a lot of allocations from the call to `gradient` in the derivative function...
Is there a better way to deal with this problem?
`isoutofdomain` (https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous) to prevent escape.
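A hedged sketch of the `isoutofdomain` option with an adaptive solver; the bounds and the simple oscillator here are stand-ins for the interpolation grid edges and the real potential:

```julia
using OrdinaryDiffEq

# Hypothetical bounds standing in for the grid edges x[1], x[end]
xmin, xmax = -1.0, 1.0

f(u, p, t) = [u[2], -u[1]]                     # simple oscillator as a stand-in RHS
prob = ODEProblem(f, [0.5, 0.0], (0.0, 10.0))

# Returning true rejects the proposed step, so the adaptive solver
# shrinks dt rather than evaluating the RHS outside the domain
sol = solve(prob, Tsit5();
            isoutofdomain = (u, p, t) -> !(xmin <= u[1] <= xmax))
```

Note that this mechanism relies on adaptive stepping (it works by rejecting steps), so it would not help with `adaptive = false` as in the snippet above.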
`EnsembleProblem` with `EnsembleThreads`, it works fine simulating my specific problem up to ~100 trajectories (on 4 threads), but if I push that further to, for example, ~1000 trajectories, it is much, much slower (much more than an order of magnitude slower than I'd expect to see in the worst case). I assume this is an issue of memory management (I think solving my problems allocates a fair bit of memory). Has anyone seen this kind of behaviour, or does anyone know what I could do to resolve it with the least amount of effort? Should I maybe look into multi-processing instead of multi-threading?
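For context, a minimal ensemble setup (a hypothetical scalar ODE in place of the real problem) showing how the threaded and distributed backends are swapped:

```julia
using OrdinaryDiffEq

f(u, p, t) = p[1] * u
base = ODEProblem(f, 1.0, (0.0, 1.0), [0.5])

# Each trajectory gets its own initial condition via prob_func
prob_func(prob, i, repeat) = remake(prob; u0 = rand())
eprob = EnsembleProblem(base; prob_func = prob_func)

# EnsembleThreads() multithreads within one process; EnsembleDistributed()
# switches to multi-processing with the same call signature
sim = solve(eprob, Tsit5(), EnsembleThreads(); trajectories = 100)
```

Because the backends share an interface, trying multi-processing for an allocation-heavy problem is a one-argument change.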
[slack] <Ricardo M. S. Rosa> Hi. I am having trouble using `LsqFit.curve_fit` with an `ODEProblem`.
Here is an MWE:
```julia
# Data
k = 0.5
x₀_data = [0.0:0.1:1.0...]
x_data = x₀_data * exp(k)

model_exp(x₀, β) = x₀ * exp(β[1] * 1.0)

function model_ode(x₀, β)
    function dudt!(du, u, p, t)
        du[1] = p[1] * u[1]
    end
    tspan = (0.0, 1.0)
    prob = ODEProblem(dudt!, [x₀], tspan, β)
    sol = solve(prob)
    return sol(1.0)[1]
end

fit_exp = curve_fit(model_exp, x₀_data, x_data, [0.0])
fit_exp.param

fit_ode = curve_fit(model_ode, x₀_data, x_data, [0.0])
fit_ode.param
```
The fit with `model_exp` works just fine, but the second fit, with `model_ode`, gives me the error:
```
MethodError: no method matching zero(::Type{Vector{Float64}})
Closest candidates are:
  zero(::Union{Type{P}, P}) where P<:Dates.Period at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/Dates/src/periods.jl:53
  zero(::AbstractAlgebra.MatrixElem{T} where T) at /Users/rrosa/.julia/packages/AbstractAlgebra/6JkeN/src/generic/Matrix.jl:232
  zero(::AbstractAlgebra.MatrixElem{T} where T, ::AbstractAlgebra.Ring) at /Users/rrosa/.julia/packages/AbstractAlgebra/6JkeN/src/generic/Matrix.jl:232
  ...
```
`Optim` and `LsqFit`. I had to broadcast with `curve_fit((x₀, β) -> model_ode.(x₀, β), x₀_data, x_data, [0.0])` to make it work. Thanks again!
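The broadcast is needed because `curve_fit` calls the model with the whole `xdata` vector, while `model_ode` maps one scalar to one scalar. A self-contained sketch of the same fix using a plain exponential in place of the ODE model (so it runs without DifferentialEquations):

```julia
using LsqFit

# curve_fit expects model(xdata, p) to return predictions for all of xdata;
# a scalar-in/scalar-out model must be broadcast over the data
scalar_model(x₀, β) = x₀ * exp(β[1])             # one point -> one prediction
vector_model(x₀, β) = scalar_model.(x₀, Ref(β))  # whole dataset at once

xdata = collect(0.0:0.1:1.0)
ydata = xdata .* exp(0.5)
fit = curve_fit(vector_model, xdata, ydata, [0.0])
fit.param   # should recover a value near 0.5
```

`Ref(β)` keeps the parameter vector from being broadcast elementwise alongside the data.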