BridgingBot
@GitterIRCbot
[slack] <wilwxk> I'm testing the second example in the README of ModelingToolkit (https://github.com/SciML/ModelingToolkit.jl).
I tested with remake: I could create the problem, but the solve failed. Is there any detailed tutorial for this?
So, it's creating the differential equation system from the model again? If I extract the function with generate_function(odesys), should it work faster?: https://files.slack.com/files-pri/T68168MUP-F021KUML44V/download/screenshot_20210512_084903.png
[slack] <chrisrackauckas> I don't get your question
[slack] <wilwxk> I just didn't understand the part about MTK building new functions. Are you talking about the function f(u0, p, t) needed to create any generic new ODEProblem, or about some internal functions?
And to make it faster, is remake the best solution if I only want to change parameters and the initial state and run it again?
[slack] <chrisrackauckas> Yes, when you build a new ODEProblem from MTK it creates a new function. You want to just do that once, and then just remake with new parameters.
[slack] <wilwxk> Got it, just one more question, when I save f = eval(generate_function(odesys)[2]) and use it with ODEProblem repeatedly, is it creating a new raw f(u0, p, t) and bypassing the MTK build time ?
[slack] <chrisrackauckas> that would bypass it
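To make the build-once-then-remake pattern concrete, here is a minimal sketch (using a plain hand-written f(u, p, t) rather than an MTK-generated function, so the names and numbers here are purely illustrative): the expensive construction happens once, and remake swaps in new parameters without rebuilding anything.

```julia
using OrdinaryDiffEq  # DifferentialEquations.jl re-exports the same API

# Plain right-hand side; for MTK this would be the generated function.
f(u, p, t) = p[1] * u

# Construct the problem ONCE (for MTK, code generation happens here).
prob = ODEProblem(f, 1.0, (0.0, 1.0), [-2.0])

# remake swaps u0/p on the existing problem without reconstructing it,
# so repeated solves skip any build/codegen cost.
for pnew in ([-0.5], [-1.0], [-3.0])
    sol = solve(remake(prob, p=pnew), Tsit5())
end
```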
[slack] <Pieter Gunnink> Thanks, I see it now. When is the ordering determined actually? When you run ODEProblem() or before?
[slack] <Andrew Leonard> for a linear system du = A*u is the condition number of A a "measure" of the stiffness of the system?
[slack] <chrisrackauckas> yes
[slack] <chrisrackauckas> See https://www.youtube.com/watch?v=FENK1SDvPiA for a much more precise discussion of the property.
[slack] <chrisrackauckas> It's determined before, by the ordering of the symbolic states(sys) vector.
[slack] <Andrew Leonard> awesome, thanks!
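To make the linear-system case above concrete: for du = A*u, the usual stiffness ratio is the spread of the magnitudes of the real parts of A's eigenvalues, which for a symmetric A coincides with its condition number. A minimal sketch (the matrix is arbitrary):

```julia
using LinearAlgebra

# du/dt = A*u with widely separated decay rates
A = [-1000.0    0.0;
        0.0    -1.0]

λ = eigvals(A)
stiffness_ratio = maximum(abs.(real.(λ))) / minimum(abs.(real.(λ)))
# For this symmetric A, cond(A) gives the same number (1000.0),
# matching the "condition number as a measure of stiffness" intuition.
```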
[slack] <Pieter Gunnink> I see, thanks!

[slack] <torkel.loman> That link is better, yes, I overlooked it. Yes, the PDE things might have something to do with it; I've never really worked much with PDEs, so I usually get out of my depth very quickly there.

Thanks for the Canard code, it's great. I get an AssertionError: The provided index does not refer to a Hopf Point for the last step (br_po, utrap = continuation(...)). It happens after I switch the third argument from 3 to 2; I will have a look to see if I can figure it out.

Also, what do you mean by there being a tiny parameter in the model? Sounds like useful information!


[slack] <JeremyB> Hey! I started using DifferentialEquations.jl recently and I have some small questions: I am trying to solve a simple ODE (a ball in a particular potential), but the potential I need to use is given to me as a .npy file. My workaround for now has been to interpolate the given array and compute the gradient in the derivative function.
```
using NPZ, Interpolations, DifferentialEquations  # imports needed by this snippet

const tstart = 0.0
const tend = 1e-5
const tstep = 5e-8

trap = npzread("somefile.npy")
x = npzread("somefile2.npy")
z = npzread("somefile3.npy")

# linear interpolation of the potential on the (x, z) grid
interpolated_potential = LinearInterpolation((x, z), trap, extrapolation_bc=0)

function evolve(dz, z, p, t)
    p₁, p₂, q₁, q₂ = z[1], z[2], z[3], z[4]

    dp₁, dp₂ = -Interpolations.gradient(interpolated_potential, q₁, q₂)
    dq₁ = p₁
    dq₂ = p₂
    dz .= [dp₁, dp₂, dq₁, dq₂]
    return nothing
end

probl_func = ODEFunction(evolve, syms=[:vx, :vy, :x, :y])

function condition(u, t, integrator)
    u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1]
end

function affect!(integrator)
    terminate!(integrator)
end

cb = DiscreteCallback(condition, affect!)

u₀ = [randvelocity()..., randpos()...]
probl = ODEProblem(probl_func, u₀, (tstart, tend))
sol = solve(probl, Vern9(), dt=tstep, abstol=1e-6, reltol=1e-6, adaptive=false, callback=cb)
```
However, I run into some issues: when/if the ball leaves the interpolated region, it throws an error. I tried to catch it with the callback, but for some initial conditions it still crashes. Also, it seems I get a lot of allocations from the call to gradient in the derivative function...
Is there a better way to deal with this problem?

[slack] <contradict> You might try isoutofdomain https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous to prevent escape.
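A sketch of how isoutofdomain could be wired into the ball-in-a-potential example above (the state layout and grid names follow that snippet; note that isoutofdomain works by rejecting adaptive steps, so the adaptive=false option there would have to be dropped):

```julia
# Returning true rejects the proposed step and shrinks dt, so the
# integrator never commits a state outside the interpolated region.
# x and z are the interpolation grid vectors from the snippet above.
outside(u, p, t) = u[3] > x[end] || u[3] < x[1] || u[4] > z[end] || u[4] < z[1]

sol = solve(probl, Vern9(); isoutofdomain=outside, callback=cb,
            abstol=1e-6, reltol=1e-6)
```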
[slack] <contradict> What error do you get when it does escape?
[slack] <Dan Padilha> When using EnsembleProblem with EnsembleThreads, it works fine simulating my specific problem up to ~100 trajectories (on 4 threads), but if I push that further to, for example, ~1000 trajectories, it is much, much slower (as in, much more than an order of magnitude slower than I'd expect to see in the worst case). I assume this is a memory-management issue (I think solving my problems allocates a fair bit of memory). Has anyone seen this kind of behaviour, or know what I could do to resolve it with the least amount of effort? Should I maybe look into multi-processing instead of multi-threading?
[slack] <rveltz> the mistake is because br.bifpoint[2] is not a Hopf point
[slack] <rveltz> > Also, what do you mean with there being a tiny paraemter in the model?
That’s my feeling from the simulation. However, sometimes this is buried in the ODE and requires specific investigation.
[slack] <Simon Welker> I just had this happen as well -- to me it looks like I ran out of physical memory going from 1k to 10k trajectories, so my system was constantly swapping. I got around it by using dense=false, which saved enough memory for it to work for me
[slack] <Simon Welker> not sure if it's the same cause for you -- just something to try
[slack] <Dan Padilha> Thanks @Simon Welker, I think you're right about the cause (although unfortunately in my case I need to keep the dense interpolation). 😞
[slack] <Simon Welker> if you're reducing the data after the ensemble simulation, maybe solving this in batches could help
[slack] <chrisrackauckas> yes, use batch_size and reduction in effective ways to keep the memory down.
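A sketch of the batch_size/reduction combination on a toy problem: the reduction shown keeps only each trajectory's final value, so the full dense solutions can be garbage-collected after every batch instead of all being held at once.

```julia
using OrdinaryDiffEq

f(u, p, t) = -u
prob = ODEProblem(f, 1.0, (0.0, 1.0))

# Called once per completed batch: fold the batch into the running
# result `u` and return (u, converged). Keeping only endpoints means
# peak memory is one batch of solutions, not all trajectories.
reduce_batch(u, batch, I) = (append!(u, [sol.u[end] for sol in batch]), false)

ensemble = EnsembleProblem(prob;
    prob_func=(p, i, repeat) -> remake(p, u0=rand()),
    reduction=reduce_batch,
    u_init=Float64[])

sim = solve(ensemble, Tsit5(), EnsembleThreads();
            trajectories=10_000, batch_size=500)
```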

[slack] <Ricardo M. S. Rosa> Hi. I am having trouble using LsqFit.curve_fit with an ODEProblem.

Here is an MWE:
```
# Data
k = 0.5
x₀_data = [0.0:0.1:1.0...]
x_data = x₀_data * exp(k)

# Models
model_exp(x₀, β) = x₀ * exp(β[1] * 1.0)

function model_ode(x₀, β)
    function dudt!(du, u, p, t)
        du[1] = p[1] * u[1]
    end
    tspan = (0.0, 1.0)
    prob = ODEProblem(dudt!, [x₀], tspan, β)
    sol = solve(prob)
    return sol(1.0)[1]
end

# Fit model_exp
fit_exp = curve_fit(model_exp, x₀_data, x_data, [0.0])
fit_exp.param

# Fit model_ode
fit_ode = curve_fit(model_ode, x₀_data, x_data, [0.0])
fit_ode.param
```
The fit with `model_exp` works just fine, but the second fit, with `model_ode`, gives me the error

```
MethodError: no method matching zero(::Type{Vector{Float64}})
Closest candidates are:
  zero(::Union{Type{P}, P}) where P<:Dates.Period at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/Dates/src/periods.jl:53
  zero(::AbstractAlgebra.MatrixElem{T} where T) at /Users/rrosa/.julia/packages/AbstractAlgebra/6JkeN/src/generic/Matrix.jl:232
  zero(::AbstractAlgebra.MatrixElem{T} where T, ::AbstractAlgebra.Ring) at /Users/rrosa/.julia/packages/AbstractAlgebra/6JkeN/src/generic/Matrix.jl:232
  ...
```

[slack] <chrisrackauckas> LsqFit is generally pretty bad... just use Optim instead 😉
[slack] <chrisrackauckas> But see how it's implemented in DiffEqParamEstim
[slack] <Ricardo M. S. Rosa> Great, thanks! I will try with Optim!
[slack] <Brian Groenke> Did the UDE paper get published yet? And/or where is it in review?
[slack] <chrisrackauckas> I have been sitting on the revisions for months because other papers have been getting written. Thanks for the reminder 😅
[slack] <Ricardo M. S. Rosa> Ok, I got both working, Optim and LsqFit. I had to broadcast with curve_fit((x₀, β) -> model_ode.(x₀, β), x₀_data, x_data, [0.0]) to make it work. Thanks again!
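For reference, the Optim route suggested earlier might look like this: a hand-rolled sum-of-squares loss over the same toy data from the MWE, minimized with NelderMead (a sketch, not DiffEqParamEstim's actual API):

```julia
using OrdinaryDiffEq, Optim

k = 0.5
x₀_data = collect(0.0:0.1:1.0)
x_data = x₀_data * exp(k)

function model_ode(x₀, β)
    dudt!(du, u, p, t) = (du[1] = p[1] * u[1])
    prob = ODEProblem(dudt!, [x₀], (0.0, 1.0), β)
    return solve(prob, Tsit5())(1.0)[1]
end

# Broadcast over the data (Ref keeps β unbroadcast), sum the squares.
loss(β) = sum(abs2, model_ode.(x₀_data, Ref(β)) .- x_data)

res = Optim.optimize(loss, [0.0], NelderMead())
# Optim.minimizer(res) should land near [0.5]
```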
[slack] <torkel.loman> More fundamentally, what is a "tiny parameter"? From your statement, it seems like a term with a specific meaning (more than what is in the words). I've never heard the expression, though, so if there's something to it, it would be useful to know! (especially if my model contains one of these...)

[slack] <chrisrackauckas> @yingbo_ma @isaacsas @SebastianM-C I added a help portion in the new benchmarks for how to see the artifacts in the PR:

https://github.com/SciML/SciMLBenchmarks.jl#inspecting-benchmark-results

[slack] <SebastianM-C> @chrisrackauckas Are there any tutorials / resources for using buildkite with julia like you have in SciMLBenchmarks or was it a custom made thing?
[slack] <chrisrackauckas> It was a custom made thing
[slack] <chrisrackauckas> Though I want to get it packaged up and used on SciMLTutorials, TuringTutorials, FluxBench, etc.
[zulip] <Brenhin Keller> General diffeq question: what'd be the best way in Julia to solve the diffusion equation in 2 or more dimensions?
a la https://upload.wikimedia.org/wikipedia/commons/0/01/Heat.gif
[slack] <chrisrackauckas> with some of the sparsity handling details in https://diffeq.sciml.ai/stable/tutorials/advanced_ode_example/#Automatic-Sparsity-Detection
[slack] <chrisrackauckas> all of those examples are reaction-diffusion equations.
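As a minimal sketch of the method-of-lines approach those examples use (discretize the Laplacian on a grid, then hand the resulting ODE system to a solver; the grid size and diffusivity here are arbitrary):

```julia
using OrdinaryDiffEq

const N = 32           # grid points per side (arbitrary)
const Dc = 0.1         # diffusivity (arbitrary)
const h = 1.0 / (N - 1)

# 5-point finite-difference Laplacian with zero-Dirichlet boundaries
function heat!(du, u, p, t)
    fill!(du, 0.0)
    for j in 2:N-1, i in 2:N-1
        du[i, j] = Dc * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1] - 4u[i, j]) / h^2
    end
end

u0 = zeros(N, N)
u0[N÷2, N÷2] = 1.0     # hot spot in the middle, as in the linked animation

prob = ODEProblem(heat!, u0, (0.0, 0.01))
sol = solve(prob, ROCK2())  # stabilized explicit method, suited to diffusion
```

For larger grids the system becomes stiff enough that implicit solvers plus the sparsity machinery in the tutorial linked above start to pay off.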
[slack] <cbkeller> Cool, thanks!
[slack] <ashton.bradley> is there a new issue with Robin b.c.? The example errors: SciML/DiffEqOperators.jl#396
[slack] <Mattias Fält> Has anyone worked on tracking discontinuities per state in DelayDiffEq.jl? I'm thinking of the case where only some of the equations are neutral. It could maybe also make it possible to simulate and track, for example, impulses.
[slack] <devmotion> Not that I know of. A DDE is either declared as neutral or not, there's no distinction between different states.
[slack] <devmotion> I don't know how common it is that the system is so decoupled that discontinuities solely affect a subset of states and don't propagate to other states.
[slack] <devmotion> If you can decompose the system I guess you might be able to solve the decoupled systems independently?
[slack] <Mattias Fält> We have a use case in ControlSystems.jl where we simulate a linear system with internal delays as an interconnection of a linear system and pure delay terms. In that case it's not too common that the delayed signal is fed back into itself.
[slack] <Mattias Fält> I.e. something like (where we have to do some trickery to get it into DelayDiffEq)
x(t)' = -x(t) + d1(t-t1)
d1(t) = x(t) + d2(t-t2)
d2(t) = -x(t) + u(t)
Where u is something with a discontinuity at t=0.
[slack] <Mattias Fält> The realizations are usually such that the states are smooth after a while, and I assume that would speed up the solvers