BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> the other thing is that, StochasticDiffEq had a regression show up in the latest benchmarks and this might be an indicator of that.
[slack] <krishnab> Oh I see, so this kind of issue is the real deal. It is nice to see the places where there is work to be done 🙂. Thanks for the extended technical discussion @Brian Groenke and @chrisrackauckas. I can see the challenge of trying to adjust the solver to stiffness in both the time and spatial domains. I hope you find someone good who can work on it.
BridgingBot
@GitterIRCbot
[slack] <Simon Welker> @frankschae My overall goal is just avoiding allocations wherever I can (and wherever it's useful). There's no particular need for me to use static arrays for noise (=return values of g?) while using normal arrays for the states (=return values of f?). I'm currently just trying to use static arrays as u0 so the solution matrix can be preallocated (so I'd save the nsteps allocations, replacing them by one larger allocation).
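For context, a minimal sketch of the fully out-of-place StaticArrays setup Simon is describing; the dynamics, dimensions, and solver choice here are made up for illustration:

```julia
using StaticArrays, StochasticDiffEq

# out-of-place drift and (diagonal) noise returning SVectors, so each step
# works on stack-allocated values instead of heap-allocated arrays
f(u, p, t) = SVector(-u[1], -u[2])
g(u, p, t) = SVector(0.1, 0.1)

u0 = @SVector [1.0, 1.0]          # static u0, so the solution storage is typed on SVectors
prob = SDEProblem(f, g, u0, (0.0, 1.0))
sol = solve(prob, SOSRI())
```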
[slack] <chrisrackauckas> are the allocations affecting the time here?
[slack] <Simon Welker> hmm, I'm not sure. I'd assume so esp. since I want to run this as a large ensemble simulation (10k+). Is there a way for me to tell, without running code for a fully nonallocating version?
[slack] <chrisrackauckas> Profile it
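For readers following along, a hedged sketch of what "profile it" might look like here; the toy problem below is a stand-in for whatever is actually being benchmarked:

```julia
using Profile, BenchmarkTools, StochasticDiffEq

# toy stand-in for the real problem being benchmarked
f(u, p, t) = -u
g(u, p, t) = 0.1u
prob = SDEProblem(f, g, 1.0, (0.0, 1.0))

# @btime / @time already report allocation counts and total bytes
@btime solve($prob, SOSRI())

# a sampling profile shows whether time is actually spent allocating / in GC
Profile.clear()
@profile for _ in 1:1000
    solve(prob, SOSRI())
end
Profile.print(mincount = 50)   # or use ProfileView.@profview for a flame graph
```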
BridgingBot
@GitterIRCbot
[slack] <Simon Welker> cheers, I'll try that soon. I was so far only looking at @btime
[slack] <Simon Welker> I guess even if it isn't worth it for my problem, generally allowing for this preallocation (e.g. via SArrays) could be useful for other problems?
[slack] <Simon Welker> (ah it's from 18337 -- I'm only on lecture 7 so far :D)
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> you can pre-allocate by passing in more arguments, but it's undocumented because I want to change and improve that interface.
BridgingBot
@GitterIRCbot
[slack] <Jonnie> LabelledArrays and ComponentArrays have some performance regressions for some of my test problems after the DiffEqBase->FastBroadcast switch. Writing a method for FastBroadcast.use_fast_broadcast fixes it, though. Is this something I should be opting into for ComponentArrays? Or is this going to be handled on the DifferentialEquations side?
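A rough sketch of what such an opt-in might look like, purely for illustration; the exact method signature and the name of ComponentArrays' broadcast style are assumptions and should be checked against both packages' source:

```julia
using FastBroadcast, ComponentArrays

# Hypothetical opt-in (the dispatch type `ComponentArrays.CAStyle` is an
# assumption, not a confirmed name): tell FastBroadcast's @.. that this
# broadcast style is safe to lower to its fast, loop-based path.
FastBroadcast.use_fast_broadcast(::Type{<:ComponentArrays.CAStyle}) = true
```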
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> @yingbo_ma @elrodc how do we recover the opt-ins that were in
[slack] <chrisrackauckas> DiffEqBase?
[slack] <chrisrackauckas> and ComponentArrays had one
[slack] <yingbo_ma> use_fast_broadcast is not safe, so it would be good for the package developer to set it to true manually.
[slack] <yingbo_ma> I don’t think we should do opt-in
[slack] <chrisrackauckas> what about relying on the fast_scalar_indexing trait?
BridgingBot
@GitterIRCbot

[slack] <yingbo_ma>
```julia
julia> a = rand(5); b = @views a[1:2:end];

julia> ArrayInterface.fast_scalar_indexing(b)
true
```

[slack] <yingbo_ma> Not really.
[slack] <yingbo_ma> Also, we don’t know if the broadcast has fast scalar indexing
[slack] <yingbo_ma> I am not sure there’s a good proxy for use_fast_broadcast. This decision should be made by the developer of the package.
BridgingBot
@GitterIRCbot
[slack] <yingbo_ma> Also, broadcast is very customizable, and FastBroadcast is very rigid. We should not just override custom broadcasting behavior in @..
BridgingBot
@GitterIRCbot
[slack] <elrodc> I think @.. on b should be fine, even if it's not fast_scalar_indexing?
[slack] <elrodc> I agree though that we don't want to override custom broadcasting behavior. If someone went through the trouble of customizing it for SparseArrays or StaticArrays, odds are we should stick with their implementation.
BridgingBot
@GitterIRCbot
[slack] <Dan Padilha> Is there a standard or easy way to store and extract the times (or timestep indices) when callbacks get triggered, and have this available in the resulting ODESolution?
BridgingBot
@GitterIRCbot
[slack] <Lütfullah Tomak> What does integ.uprev2 stand for in integrators? When I use Dual numbers, I mostly want the partials with respect to event boundaries to be kept, but sometimes I also need the non-directional partials at the event location. Can I assume integ.uprev2 is the state without the non-directional partials? My quick check suggests it is, but I want to confirm.
BridgingBot
@GitterIRCbot
[slack] <Lütfullah Tomak> I cannot see anywhere they are stored. You can add an empty vector to the parameters, then each time you enter affect!, push the time of the event into this vector.
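A minimal sketch of that suggestion; the problem, parameter names, and event condition below are made up for illustration:

```julia
using OrdinaryDiffEq

f(u, p, t) = -p.k * u
p = (k = 1.0, event_times = Float64[])            # carry an empty vector in the parameters

condition(u, t, integrator) = u[1] - 0.5          # event fires when u crosses 0.5
function affect!(integrator)
    push!(integrator.p.event_times, integrator.t) # record when the callback triggered
end
cb = ContinuousCallback(condition, affect!)

prob = ODEProblem(f, [1.0], (0.0, 5.0), p)
sol = solve(prob, Tsit5(), callback = cb)
p.event_times                                     # times at which the callback fired
```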
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> It's a second timepoint back used in some extrapolation schemes
[slack] <chrisrackauckas> It matches the type of u
[slack] <iv.borissov> Hi, @chrisrackauckas. Could you have a look at this issue when you have time: https://github.com/SciML/DiffEqCallbacks.jl/issues/30#issuecomment-828788228 The topic is old and I know there is no general solution for SavingCallback+Event, but maybe you can figure out what the issue is with the proposed workaround.
BridgingBot
@GitterIRCbot
[slack] <Lütfullah Tomak> Thanks @chrisrackauckas. I had misread it; it is not what I want.
BridgingBot
@GitterIRCbot
[slack] <Dan Padilha> Yeah, that's one way, thanks. I feel like there should be something built in for this, though. I ended up just tracking the time index in affect! by updating my integrator.u (which is a DEDataVector tracking some other data as well, so essentially integrator.u.event_trigger_idx = length(integrator.sol.t)) and then taking the unique values in the resulting [u.event_trigger_idx for u in solution].
BridgingBot
@GitterIRCbot
[slack] <atiyo ghosh> Are there any Julia packages of choice for differential equation solutions via spectral methods? https://github.com/SciML/DiffEqApproxFun.jl is no longer in active development. I was looking at https://github.com/DedalusProject/dedalus in Python land and thinking that something similar in Julia would be awesome.
[slack] <Lütfullah Tomak> How are the continuous callback conditions stored? I have a callback set with some continuous callbacks, but their condition functions differ in return value type. Does this cause type instability or hurt performance?
BridgingBot
@GitterIRCbot
[slack] <atiyo ghosh> Actually ApproxFun seems pretty well-featured.
JeremyB
@jrmbr

Hey! I started using DifferentialEquations.jl recently and I have some small questions: I am trying to solve a simple ODE (a ball in a particular potential), but the potential I need to use is given to me as a .npy file. My workaround for now has been to interpolate the given array and compute the gradient in the derivative function.

using NPZ                      # npzread
using Interpolations           # LinearInterpolation, gradient
using DifferentialEquations    # ODEFunction, ODEProblem, DiscreteCallback, solve

const tstart = 0.0
const tend = 1e-5
const tstep = 5e-8

# potential on an (x, z) grid, read from .npy files
trap = npzread("somefile.npy")
x = npzread("somefile2.npy")
z = npzread("somefile3.npy")

# linear interpolation of the potential, extrapolated with a fill value of 0
interpolated_potential = LinearInterpolation((x, z), trap, extrapolation_bc=0)

# Hamiltonian-style equations of motion; the state is (p₁, p₂, q₁, q₂)
function evolve(dz, z, p, t)
    p₁, p₂, q₁, q₂ = z[1], z[2], z[3], z[4]

    dp₁, dp₂ = -Interpolations.gradient(interpolated_potential, q₁, q₂)
    dq₁ = p₁
    dq₂ = p₂
    dz .= [dp₁, dp₂, dq₁, dq₂]
    return nothing
end

probl_func = ODEFunction(evolve, syms=[:vx, :vy, :x, :y])

# terminate the integration once the ball leaves the interpolated region
function condition(u, t, integrator)
    u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1]
end

function affect!(integrator)
    terminate!(integrator)
end

cb = DiscreteCallback(condition, affect!)

u₀ = [randvelocity()..., randpos()...]   # user-defined helpers for random initial conditions
probl = ODEProblem(probl_func, u₀, (tstart, tend))
sol = solve(probl, Vern9(), dt=tstep, abstol=1e-6, reltol=1e-6, adaptive=false, callback=cb)

However I run into some issues: when the ball leaves the interpolated region, it throws an error. I tried to catch that with the callback, but for some initial conditions it still crashes. I also seem to get a lot of allocations from the call to gradient in the derivative function.

Is there a better way to deal with this problem?
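Not an authoritative answer, but one common tweak for the allocation part of the question: `interpolated_potential` is a non-const global, so passing it through the parameters (or declaring it `const`) avoids the type instability, and writing the components of `dz` directly avoids the temporary array created by `dz .= [dp₁, dp₂, dq₁, dq₂]` on every call. A hedged sketch:

```julia
# Passing the interpolant via `p` keeps the derivative function type-stable,
# and assigning components in place removes the per-step temporary array.
function evolve!(dz, z, p, t)
    itp = p
    p₁, p₂, q₁, q₂ = z
    grad = Interpolations.gradient(itp, q₁, q₂)   # small static vector
    dz[1] = -grad[1]          # dp₁
    dz[2] = -grad[2]          # dp₂
    dz[3] = p₁                # dq₁
    dz[4] = p₂                # dq₂
    return nothing
end

probl = ODEProblem(ODEFunction(evolve!, syms=[:vx, :vy, :x, :y]),
                   u₀, (tstart, tend), interpolated_potential)
```

For the out-of-bounds crash, an extrapolation boundary condition that is defined everywhere (e.g. `extrapolation_bc = Flat()`) may keep a step that lands slightly outside the grid from throwing before the DiscreteCallback gets a chance to terminate, though that is untested against this data.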

BridgingBot
@GitterIRCbot

[slack] <torkel.loman> That would be cool. I'd never actually heard of canard explosions, but if you have the code for it and that model I'd love to check it out. I tried to figure out what they were and had a look at this one: http://www.scholarpedia.org/article/Canards Is that a good summary of the phenomenon (or do you have some other reference you'd recommend)?

Concerning the API: I think the problem is that it is quite heavy, especially for someone who is not very familiar with this kind of stuff. Generally I often have this problem with the BifurcationKit docs (I guess one problem is that this is inherently not that simple, and that it is a powerful package with rather large capabilities). Things sometimes get rather overwhelming, e.g. following https://rveltz.github.io/BifurcationKit.jl/dev/tutorials3/#Brusselator-1d-(automatic)-1 to get some help with periodic orbits, there's a lot of stuff there! Initially I was just looking for something very simple (going from a system function + Jacobian and a parameter range to a plot of the bifurcation points and the periodic orbits). But looking through that tutorial, I'm not even really sure what all the plots mean and what their outputs are (in this case I'm more interested in the non-spatial case, which is probably a bit different).

I wouldn't be too worried; the package is awesome, and all of the information is there. It is just that in cases like this it was easier to not go ahead with the periodic orbits than to try to tackle the documentation (but if you wanted to expand the tutorials, some very simple examples, e.g. bifurcation diagrams for simple models, would be useful).

BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> ApproxFun is well-featured and you can just use it with DiffEq without the connection package now.
[slack] <chrisrackauckas> it won't store it, that should be fine.
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> I have a customized Newton's method implementation that I made to resolve the temperature/enthalpy conservation law for freezing curves... and I had to add a backtracking line search to make it work (have to avoid jumping over the discontinuity), so I figured the same principle could/should be applied to an implicit integrator.
[slack] <chrisrackauckas> it could/should
[slack] <chrisrackauckas> I was just talking to @yingbo_ma about SciML/OrdinaryDiffEq.jl#1399 the other day.
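For readers, a minimal generic sketch of the damped Newton idea Brian describes; this is not OrdinaryDiffEq's nonlinear solver, just an illustration of backtracking on the residual norm, with a made-up toy residual at the end:

```julia
using LinearAlgebra, ForwardDiff

# Newton iteration with a simple backtracking line search on the residual
# norm, so a single full step cannot leap across a sharp feature.
function newton_backtracking(F, x; tol = 1e-10, maxiter = 50)
    for _ in 1:maxiter
        r = F(x)
        norm(r) < tol && return x
        J = ForwardDiff.jacobian(F, x)
        Δ = J \ r
        α = 1.0
        while norm(F(x - α * Δ)) ≥ norm(r) && α > 1e-8
            α /= 2                     # halve the step until the residual decreases
        end
        x -= α * Δ
    end
    return x
end

# toy usage: a residual with a steep, nearly discontinuous nonlinearity
F(x) = [tanh(50 * (x[1] - 1.0)) + x[1] - 1.2]
newton_backtracking(F, [0.0])
```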
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> Is there a way to access the integrator from a ManifoldProjection callback? Could be useful to have access to f if it's a callable struct with relevant fields... but then again I guess g could also be a callable struct.
BridgingBot
@GitterIRCbot
[slack] <wilwxk> Can someone explain to me what is happening here?
I know that @btime is better for benchmarking, but @time expresses the real time better. It gives similar results even after the second run. Is this something related to how ModelingToolkit works, or am I missing something more fundamental? https://files.slack.com/files-pri/T68168MUP-F021MUSQVUK/download/screenshot_20210512_010609.png
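Without the screenshot it is hard to say what exactly is going on, but a common source of confusion in this kind of comparison is sketched below with a toy problem standing in for the real one: the first @time of a call includes compilation, while @btime runs the expression many times and reports the minimum with compilation already paid.

```julia
using BenchmarkTools, OrdinaryDiffEq

# toy stand-in for whatever problem the screenshot benchmarks
prob = ODEProblem((u, p, t) -> -u, 1.0, (0.0, 1.0))

@time solve(prob, Tsit5())     # first call: includes compilation time
@time solve(prob, Tsit5())     # later calls: closer to the actual runtime
@btime solve($prob, Tsit5())   # minimum over many samples, compilation excluded
```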