BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> Is there a way to access the integrator from a ManifoldProjection callback? Could be useful to have access to f if it's a callable struct with relevant fields... but then again I guess g could also be a callable struct.
BridgingBot
@GitterIRCbot
[slack] <wilwxk> Can someone explain to me what is happening here?
I know that @btime is better for benchmarking, but @time better reflects the real time. It gives similar results even after a second run. Is this something related to how ModelingToolkit works, or am I missing something more fundamental?: https://files.slack.com/files-pri/T68168MUP-F021MUSQVUK/download/screenshot_20210512_010609.png
BridgingBot
@GitterIRCbot
[slack] <wilwxk> I know that @btime is better for benchmarking, but @time better reflects the real time. It gives similar results even after a second run.
Why does solve take more time with a newly declared problem than with an already-solved problem if they are exactly the same? Shouldn't it be compiled and fine after the first run? Is this related to how ModelingToolkit works, or am I missing something more fundamental?
BridgingBot
@GitterIRCbot
[slack] <mason.protter> Given that we have no idea what code you're running, anything anyone can say would be a shot-in-the-dark guess
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> I'm not 100% sure, but I think ModelingToolkit does some stuff under the hood to convert the ODESystem into an ODEFunction. I wouldn't be surprised if it has to create runtime generated functions, which would be invalidated and recompiled on each construction of ODEProblem. Have you tried using remake instead?
BridgingBot
@GitterIRCbot
[slack] <Pieter Gunnink> When using ModelingToolkit.jl, is there a way for a solver algorithm to know about the ordering of the variables? I have a custom solver step that I need to inject, which needs to know which variable is which. To be more precise, I have an (N,3) problem with 3 Cartesian components, and I want to use a normalized Euler-Heun step such that the norm of the individual vectors is 1. This cannot be implemented with a callback, because I also need to normalize the half-step inside the Euler-Heun algorithm. I have ModelingToolkit.jl make a 3N problem, which works well, but in order to normalize I need to be sure that the ordering does not get switched around (or know the ordering). Is the ordering available inside perform_step!(integrator, cache)?
BridgingBot
@GitterIRCbot

[slack] <rveltz> The section Canard Cycles and their Explosion of http://www.scholarpedia.org/article/Canards is a good start.

but if you wanted to expand the tutorials, some very simple examples, e.g. bifurcation diagrams for simple models, would be useful
I added a section for ODEs in the tutorials. The issue is that ODEs require different parameters from PDEs. So I guess it would be simpler if I had a BifODEProblem and a BifPDEProblem and dispatched on those.

The real question is whether you think this is too overwhelming: https://rveltz.github.io/BifurcationKit.jl/dev/tutorialsODE/#Branch-of-periodic-orbits-with-finite-differences-1

BridgingBot
@GitterIRCbot

[slack] <rveltz> Code for the canard explosion:
https://gist.github.com/rveltz/15f09eab9a325b069a3239be0713e803

You have to zoom in a lot to see the explosion.

Note that this is a crude computation. For a canard explosion, you need an adaptive algorithm. Plus, the period grows very quickly, which is bad for backward Euler. You could use shooting to get better results (I guess).

BridgingBot
@GitterIRCbot
[slack] <rveltz> There must be a tiny parameter in your model
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> MTK is building new functions. You don't want to recompile each time.
[slack] <chrisrackauckas> Use the varmap_to_vars
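For context, a minimal sketch of the varmap_to_vars pattern being pointed at, on a toy two-state system (the system and values are illustrative, not the poster's model):

```julia
using ModelingToolkit

@parameters a b
@variables t x(t) y(t)
D = Differential(t)
sys = ODESystem([D(x) ~ a * x, D(y) ~ b * y], t, [x, y], [a, b], name = :sys)

# The numeric layout follows the symbolic states(sys)/parameters(sys) order;
# varmap_to_vars turns variable => value pairs into vectors in that order.
u0 = varmap_to_vars([x => 1.0, y => 2.0], states(sys))
p  = varmap_to_vars([a => 0.5, b => 1.5], parameters(sys))
```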
BridgingBot
@GitterIRCbot
[slack] <wilwxk> <@U8D9768Q6> I'm testing the <https://github.com/SciML/ModelingToolkit.jl|second example> in the README of ModelingToolkit.
<@U01H36BUDJB> I tested with remake; I could create the problem, but the solve failed. Is there any detailed tutorial for that?
So, it's creating the differential equation system from the model again? If I extract the function with generate_function(odesys), should it work faster?: https://files.slack.com/files-pri/T68168MUP-F021KUML44V/download/screenshot_20210512_084903.png
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> I don't get your question
BridgingBot
@GitterIRCbot
[slack] <wilwxk> I just didn't understand the part about MTK building new functions; are you talking about the function f(u0, p, t) needed to create any generic new ODEProblem, or about some internal functions?
And to make it faster, is remake the best solution if I only want to change parameters and the initial state and run it again?
[slack] <chrisrackauckas> Yes, when you build a new ODEProblem from MTK it creates a new function. You want to do that just once, and then just remake with new parameters.
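A minimal sketch of that build-once-then-remake pattern (a hand-written ODEProblem stands in here for the MTK-generated one):

```julia
using OrdinaryDiffEq

f(u, p, t) = p[1] * u
prob = ODEProblem(f, 1.0, (0.0, 1.0), [0.5])  # function construction happens once
sol1 = solve(prob, Tsit5())

# remake swaps parameters/initial state without regenerating the function:
prob2 = remake(prob; u0 = 2.0, p = [0.7])
sol2 = solve(prob2, Tsit5())
```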
BridgingBot
@GitterIRCbot
[slack] <wilwxk> Got it, just one more question: when I save f = eval(generate_function(odesys)[2]) and use it with ODEProblem repeatedly, is it creating a raw f(u0, p, t) and bypassing the MTK build time?
[slack] <chrisrackauckas> that would bypass it
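A small sketch of that generate_function route on a toy system (illustrative, not the poster's model); index [2] selects the in-place form:

```julia
using ModelingToolkit, OrdinaryDiffEq

@parameters a
@variables t x(t)
D = Differential(t)
sys = ODESystem([D(x) ~ a * x], t, [x], [a], name = :sys)

# generate_function returns (out-of-place, in-place) expressions; eval'ing the
# second yields a plain f!(du, u, p, t), so ODEProblem construction does no MTK work.
f! = eval(generate_function(sys)[2])
prob = ODEProblem(f!, [1.0], (0.0, 1.0), [0.5])
sol = solve(prob, Tsit5())
```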
BridgingBot
@GitterIRCbot
[slack] <Pieter Gunnink> Thanks, I see it now. When is the ordering determined actually? When you run ODEProblem() or before?
[slack] <Andrew Leonard> for a linear system du = A*u, is the condition number of A a "measure" of the stiffness of the system?
[slack] <chrisrackauckas> yes
[slack] <chrisrackauckas> See https://www.youtube.com/watch?v=FENK1SDvPiA for a much more precise discussion of the property.
[slack] <chrisrackauckas> It's determined before, by the ordering of the symbolic states(sys) vector.
[slack] <Andrew Leonard> awesome, thanks!
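A tiny illustration of that claim for a normal A, where cond(A) coincides with the usual stiffness ratio max|Re λ| / min|Re λ| (the matrix is generic, not from the thread):

```julia
using LinearAlgebra

A = Diagonal([-1000.0, -0.1])   # widely separated timescales
λ = eigvals(A)
stiffness_ratio = maximum(abs.(real.(λ))) / minimum(abs.(real.(λ)))  # 1.0e4
cond(Matrix(A))                                                      # 1.0e4
```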
BridgingBot
@GitterIRCbot
[slack] <Pieter Gunnink> I see, thanks!
BridgingBot
@GitterIRCbot

[slack] <torkel.loman> That link is better, yes, I overlooked it. The PDE things might have something to do with it; I've never really worked much with PDEs, so I usually get out of my depth very quickly there.

Thanks for the Canard code, it's great. I get an AssertionError: The provided index does not refer to a Hopf Point for the last thing (br_po, utrap = continuation(...)). It happens after I switch the third argument from 3 to 2; I will have a look to see if I can figure it out.

Also, what do you mean by there being a tiny parameter in the model? Sounds like useful information!

BridgingBot
@GitterIRCbot

[slack] <JeremyB> Hey! I started using DifferentialEquations.jl recently and I have some small questions: I am trying to solve a simple ODE (a ball in a particular potential), but the potential I need to use is given to me as a .npy file. My workaround so far has been to interpolate the given array and compute the gradient in the derivative function.
```using DifferentialEquations, NPZ, Interpolations

const tstart = 0.0
const tend = 1e-5
const tstep = 5e-8

trap = npzread("somefile.npy")
x = npzread("somefile2.npy")
z = npzread("somefile3.npy")

# linear interpolation of the potential over the (x, z) grid
interpolated_potential = LinearInterpolation((x, z), trap, extrapolation_bc=0)

function evolve(dz, z, p, t)
    p₁, p₂, q₁, q₂ = z[1], z[2], z[3], z[4]

    dp₁, dp₂ = -Interpolations.gradient(interpolated_potential, q₁, q₂)
    dq₁ = p₁
    dq₂ = p₂
    dz .= [dp₁, dp₂, dq₁, dq₂]
    return nothing
end

probl_func = ODEFunction(evolve, syms=[:vx, :vy, :x, :y])

# terminate when the ball leaves the interpolated region
function condition(u, t, integrator)
    u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1]
end

function affect!(integrator)
    terminate!(integrator)
end

cb = DiscreteCallback(condition, affect!)

u₀ = [randvelocity()..., randpos()...]
probl = ODEProblem(probl_func, u₀, (tstart, tend))
sol = solve(probl, Vern9(), dt=tstep, abstol=1e-6, reltol=1e-6, adaptive=false, callback=cb)```
However, I run into some issues: when the ball leaves the interpolated region, it throws an error. I tried to catch it with the callback, but for some initial conditions it still crashes. Also, it seems I get a lot of allocations from the call to gradient in the derivative function...
Is there a better way to deal with this problem?

BridgingBot
@GitterIRCbot
[slack] <contradict> You might try isoutofdomain https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous to prevent escape.
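A hedged sketch of that suggestion applied to the state layout u = [vx, vy, x, y] from the snippet above (reusing its x and z grid vectors; adaptive stepping must stay on, since out-of-domain steps are rejected and retried with a smaller dt):

```julia
# Reject any proposed step whose position leaves the interpolated (x, z) grid.
outside(u, p, t) = u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1]

sol = solve(probl, Vern9(); abstol = 1e-6, reltol = 1e-6,
            isoutofdomain = outside, callback = cb)
```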
BridgingBot
@GitterIRCbot
[slack] <contradict> What error do you get when it does escape?
BridgingBot
@GitterIRCbot
[slack] <Dan Padilha> When using EnsembleProblem with EnsembleThreads, it works fine simulating my specific problem up to ~100 trajectories (on 4 threads), but if I push that further to, for example, ~1000 trajectories, it is much, much slower (as in, much more than an order of magnitude slower than I'd expect to see in the worst case). I assume this is an issue of memory management (I think solving my problems allocates a fair bit of memory). Has anyone seen this kind of behaviour, or does anyone know what I could do to resolve it with the least amount of effort? Should I maybe look into multi-processing instead of multi-threading?
BridgingBot
@GitterIRCbot
[slack] <rveltz> the error is because br.bifpoint[2] is not a Hopf point
[slack] <rveltz> > Also, what do you mean by there being a tiny parameter in the model?
That's my feeling from the simulation. However, sometimes this is buried in the ODE and requires specific investigation.
BridgingBot
@GitterIRCbot
[slack] <Simon Welker> I just had this happen as well -- to me it looks like I ran out of physical memory going from 1k to 10k trajectories, so my system was constantly swapping. I got around it by using dense=false, which saved enough memory for it to work for me
[slack] <Simon Welker> not sure if it's the same cause for you -- just something to try
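For reference, a generic example of the dense=false option (saveat is a related memory saver; the problem here is illustrative, not Simon's or Dan's):

```julia
using OrdinaryDiffEq

prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 10.0))
sol_dense  = solve(prob, Tsit5())                               # stores the interpolant
sol_sparse = solve(prob, Tsit5(); dense = false, saveat = 1.0)  # saved points only
```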
BridgingBot
@GitterIRCbot
[slack] <Dan Padilha> Thanks @Simon Welker, I think you're right about the cause (although unfortunately in my case I need to keep the dense interpolation). 😞
[slack] <Simon Welker> if you're reducing the data after the ensemble simulation, maybe solving this in batches could help
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> yes, use batch_size and reduction in effective ways to keep the memory down.
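A hedged sketch of that batch_size/reduction combination (the toy problem and the endpoint-only reduction are illustrative, not Dan's model):

```julia
using DifferentialEquations

prob = ODEProblem((u, p, t) -> 1.01u, 0.5, (0.0, 1.0))
ensemble = EnsembleProblem(prob;
    prob_func = (prob, i, repeat) -> remake(prob; u0 = rand()),
    # Each finished batch collapses to its endpoint values, so full solution
    # objects are freed between batches instead of accumulating in memory.
    reduction = (u, batch, I) -> (append!(u, [sol[end] for sol in batch]), false))

sim = solve(ensemble, Tsit5(), EnsembleThreads();
            trajectories = 1000, batch_size = 100)
```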
BridgingBot
@GitterIRCbot

[slack] <Ricardo M. S. Rosa> Hi. I am having trouble using LsqFit.curve_fit with an ODEProblem.

Here is an MWE:
```using DifferentialEquations, LsqFit

# Data
k = 0.5
x₀_data = [0.0:0.1:1.0...]
x_data = x₀_data * exp(k)

# Models

model_exp(x₀, β) = x₀ * exp(β[1] * 1.0)

function model_ode(x₀, β)
    function dudt!(du, u, p, t)
        du[1] = p[1] * u[1]
    end
    tspan = (0.0, 1.0)
    prob = ODEProblem(dudt!, [x₀], tspan, β)
    sol = solve(prob)
    return sol(1.0)[1]
end

# Fit model_exp
fit_exp = curve_fit(model_exp, x₀_data, x_data, [0.0])
fit_exp.param

# Fit model_ode
fit_ode = curve_fit(model_ode, x₀_data, x_data, [0.0])
fit_ode.param```
The fit with `model_exp` works just fine, but the second fit, with `model_ode`, gives me the error

MethodError: no method matching zero(::Type{Vector{Float64}})
Closest candidates are:
  zero(::Union{Type{P}, P}) where P<:Dates.Period at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/Dates/src/periods.jl:53
  zero(::AbstractAlgebra.MatrixElem{T} where T) at /Users/rrosa/.julia/packages/AbstractAlgebra/6JkeN/src/generic/Matrix.jl:232
  zero(::AbstractAlgebra.MatrixElem{T} where T, ::AbstractAlgebra.Ring) at /Users/rrosa/.julia/packages/AbstractAlgebra/6JkeN/src/generic/Matrix.jl:232
  ...

[slack] <chrisrackauckas> LsqFit is generally pretty bad... just use Optim instead 😉
[slack] <chrisrackauckas> But see how it's implemented in DiffEqParamEstim
[slack] <Ricardo M. S. Rosa> Great, thanks! I will try with Optim!
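A minimal sketch of the Optim route on the MWE's data (DiffEqParamEstim's build_loss_objective packages the same idea; the plain L2 loss here is illustrative):

```julia
using OrdinaryDiffEq, Optim

k = 0.5
x₀_data = collect(0.0:0.1:1.0)
x_data  = x₀_data * exp(k)

dudt!(du, u, p, t) = (du[1] = p[1] * u[1])
base = ODEProblem(dudt!, [1.0], (0.0, 1.0), [0.0])

# L2 loss: remake swaps the initial condition per data point; β is the parameter vector.
function loss(β)
    sum(zip(x₀_data, x_data)) do (x₀, x)
        abs2(solve(remake(base; u0 = [x₀], p = β), Tsit5())(1.0)[1] - x)
    end
end

res = Optim.optimize(loss, [0.0], BFGS())
Optim.minimizer(res)  # ≈ [0.5]
```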
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> Did the UDE paper get published yet? And/or where is it in review?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> I have been sitting on the revisions for months because other papers have been getting written. Thanks for the reminder 😅
[slack] <Ricardo M. S. Rosa> Ok, I got both working, Optim and LsqFit. I had to broadcast with `curve_fit((x₀, β) -> model_ode.(x₀, β), x₀_data, x_data, [0.0])` to make it work. Thanks again!