BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> I think we need integration on the MTK side
[slack] <Brian Groenke> Ok, I'll make an MTK issue.
[slack] <chrisrackauckas> @frankschae do we have tests on this?
[slack] <Brian Groenke> Is there a reason why ODESystem uses the Vector type specifically and not something more generic?
[slack] <chrisrackauckas> It doesn't need to use something more generic because it's generating the code
[slack] <chrisrackauckas> so it uses something that is compatible with most linear algebra routines.
[slack] <chrisrackauckas> modelingtoolkitize could be made more generic though
BridgingBot
@GitterIRCbot
[slack] <Brian Groenke> SciML/ModelingToolkit.jl#1009
BridgingBot
@GitterIRCbot
[slack] <frankschae> @Simon Welker Did you rewrite f and g to the oop form? I guess we could have some more tests.. There is https://github.com/SciML/StochasticDiffEq.jl/blob/fec328e1bff42dd2faea226a2a7e4fc910756b9b/test/static_array_tests.jl#L23. Maybe actually @isaacsas knows better (There was this issue SciML/StochasticDiffEq.jl#365 with a fix merged on static arrays in DiffEqNoiseProcess).
BridgingBot
@GitterIRCbot
[slack] <isaacsas> No idea on that error. There was still something funny going on though since there seemed (to me) to be too many allocations even with that fix (though it helped a lot). Somewhere regular arrays were being allocated each step I think even with StaticArrays.
BridgingBot
@GitterIRCbot
[slack] <Simon Welker> @frankschae no I haven't rewritten it as oop, should I? I can do that tomorrow if it's useful for testing purposes. But doing that will overall make for the same number of allocations, just in another part of the program, right?
BridgingBot
@GitterIRCbot
[slack] <frankschae> If you use it as in your code above, I think StochasticDiffEq will default to
WienerProcess!(..)
as the noise process, which then uses in-place functions. Do I understand correctly that you'd like to use static arrays for the noise but normal arrays for the states?
[slack] <chrisrackauckas> the other thing is that, StochasticDiffEq had a regression show up in the latest benchmarks and this might be an indicator of that.
BridgingBot
@GitterIRCbot
[slack] <krishnab> Oh I see, so this kind of issue is the real deal. It is nice to see the places where there is work to be done 🙂. Thanks for the extended technical discussion @Brian Groenke and @chrisrackauckas. I can see the challenge of trying to adjust the solver to stiffness in both the time and spatial domains. I hope you find someone good who can work on it.
[slack] <Simon Welker> @frankschae My overall goal is just avoiding allocations wherever I can (and wherever it's useful). There's no particular need for me to use static arrays for noise (=return values of g?) while using normal arrays for the states (=return values of f?). I'm currently just trying to use static arrays as u0 so the solution matrix can be preallocated (so I'd save the nsteps allocations, replacing them by one larger allocation).
[slack] <chrisrackauckas> are the allocations affecting the time here?
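A minimal sketch of the oop (out-of-place) form frankschae is asking about, assuming StochasticDiffEq.jl and StaticArrays.jl are available; the drift/diffusion functions and coefficients here are hypothetical, not Simon's actual model:

```julia
using StochasticDiffEq, StaticArrays

# Out-of-place drift and diffusion: return a fresh SVector instead of
# mutating a buffer, so the state stays stack-allocated at every step.
f(u, p, t) = SA[1.01 * u[1], 0.87 * u[2]]
g(u, p, t) = SA[0.30 * u[1], 0.12 * u[2]]

u0 = SA[0.5, 0.4]                       # static initial condition
prob = SDEProblem(f, g, u0, (0.0, 1.0))
sol = solve(prob, SOSRI())              # diagonal-noise SDE solver
```

With this form StochasticDiffEq can pick a non-mutating noise process instead of defaulting to the in-place `WienerProcess!` path mentioned above.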
BridgingBot
@GitterIRCbot
[slack] <Simon Welker> hmm, I'm not sure. I'd assume so esp. since I want to run this as a large ensemble simulation (10k+). Is there a way for me to tell, without running code for a fully nonallocating version?
[slack] <chrisrackauckas> Profile it
[slack] <Simon Welker> cheers, I'll try that soon. I was so far only looking at @btime
[slack] <Simon Welker> I guess even if it isn't worth it for my problem, generally allowing for this preallocation (e.g. via SArrays) could be useful for other problems?
[slack] <Simon Welker> (ah it's from 18337 -- I'm only on lecture 7 so far :D)
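The profiling workflow Chris suggests can be sketched as follows; `work()` is a hypothetical stand-in for the actual `solve(...)` call:

```julia
using Profile

work() = sum(abs2, rand(10_000))        # hypothetical stand-in for solve(...)
work()                                  # run once so compilation is not profiled
@profile for _ in 1:1_000; work(); end  # sample the hot loop
Profile.print(format = :flat)           # flat listing of where time is spent
```

Unlike `@btime`, which only reports totals, the profile shows *which* calls the time (and, via `--track-allocation`, the allocations) come from.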
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> you can pre-allocate by passing in more arguments, but it's undocumented because I want to change and improve that interface.
BridgingBot
@GitterIRCbot
[slack] <Jonnie> LabelledArrays and ComponentArrays have some performance regressions for some of my test problems after the DiffEqBC->FastBroadcast switch. Writing a method for FastBroadcast.use_fast_broadcast fixes it, though. Is this something I should be opting into for ComponentArrays? Or is this going to be handled on the DifferentialEquations side?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> @yingbo_ma @elrodc how do we recover the opt-ins that were in
[slack] <chrisrackauckas> DiffEqBase?
[slack] <chrisrackauckas> and ComponentArrays had one
[slack] <yingbo_ma> use_fast_broadcast is not safe, so setting it to true by the developer manually would be good.
[slack] <yingbo_ma> I don’t think we should do opt-in
[slack] <chrisrackauckas> what about relying on the fast_scalar_indexing trait?
BridgingBot
@GitterIRCbot

[slack] <yingbo_ma> ```julia
julia> a = rand(5); b = @views a[1:2:end];

julia> ArrayInterface.fast_scalar_indexing(b)
true
```

[slack] <yingbo_ma> Not really.
[slack] <yingbo_ma> Also, we don’t know if the broadcast has fast scalar index
[slack] <yingbo_ma> I am not sure if there’s a good proxy for use_fast_broadcast . This decision should be done by the developer of the package.
BridgingBot
@GitterIRCbot
[slack] <yingbo_ma> Also, broadcast is very customizable, and FastBroadcast is very rigid. We should not just override custom broadcasting behavior in @..
BridgingBot
@GitterIRCbot
[slack] <elrodc> I think @.. on b should be fine, even if it's not fast_scalar_indexing?
[slack] <elrodc> I agree that we don't want to override custom broadcasting behavior, though. If someone went through the trouble of customizing it for SparseArrays or StaticArrays, odds are we should stick with their implementation.
BridgingBot
@GitterIRCbot
[slack] <Dan Padilha> Is there a standard or easy way to store and extract the times (or timestep indices) when callbacks get triggered, and have this available in the resulting ODESolution?
BridgingBot
@GitterIRCbot
[slack] <Lütfullah Tomak> What does integ.uprev2 stand for in integrators? When I use Dual numbers, I mostly want the partials at event boundaries to be taken into account, but sometimes I also need the nondirectional partials at the event location. Can I assume integ.uprev2 is the state without the nondirectional partials? My quick check suggests it is, but I want to confirm.
BridgingBot
@GitterIRCbot
[slack] <Lütfullah Tomak> I cannot see anywhere they are stored. You can add an empty vector to the parameters, then each time you fall into an affect! push the time of the event to this vector.
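Tomak's suggestion can be sketched like this, assuming OrdinaryDiffEq.jl; the bouncing-ball model is a hypothetical example:

```julia
using OrdinaryDiffEq

# Vector that affect! closes over: one entry per triggered event.
event_times = Float64[]

# Hypothetical bouncing ball: u[1] = height, u[2] = velocity.
function f(du, u, p, t)
    du[1] = u[2]
    du[2] = -9.81
end

condition(u, t, integrator) = u[1]      # event when height crosses zero
function affect!(integrator)
    push!(event_times, integrator.t)    # record the event time
    integrator.u[2] = -integrator.u[2]  # bounce
end

cb = ContinuousCallback(condition, affect!)
prob = ODEProblem(f, [10.0, 0.0], (0.0, 5.0))
sol = solve(prob, Tsit5(), callback = cb)
# event_times now holds the time of every trigger
```

Closing over an external vector (rather than stuffing it into the parameters) works the same way and keeps the parameter object clean.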
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> It's a second timepoint back used in some extrapolation schemes
[slack] <chrisrackauckas> It matches the type of u
[slack] <iv.borissov> Hi, @chrisrackauckas. Could you have a look at the issue when you have time? https://github.com/SciML/DiffEqCallbacks.jl/issues/30#issuecomment-828788228 The topic is old and I know there is no general solution for SavingCallback+Event, but maybe you can figure out what the issue is with the proposed workaround.
BridgingBot
@GitterIRCbot
[slack] <Lütfullah Tomak> Thanks @chrisrackauckas. I misread it; that is not what I want.
BridgingBot
@GitterIRCbot
[slack] <Dan Padilha> Yeah that's one way, thanks. I feel like there should be something built in for this though. I ended up just tracking the time index in affect! by updating my integrator.u (which is a DEDataVector tracking some other data as well, so essentially integrator.u.event_trigger_idx = length(integrator.sol.t) ) and just taking the unique values in the resulting [u.event_trigger_idx for u in solution].
BridgingBot
@GitterIRCbot
[slack] <atiyo ghosh> Are there any Julia packages of choice for differential equation solutions via spectral methods? https://github.com/SciML/DiffEqApproxFun.jl is no longer in active development. I was looking at https://github.com/DedalusProject/dedalus in Python land and thinking that something similar in Julia would be awesome.
[slack] <Lütfullah Tomak> How are the continuous callback conditions stored? I have a callback set with some continuous callbacks, but their condition functions differ in return type. Does this cause type instability or hurt performance?
BridgingBot
@GitterIRCbot
[slack] <atiyo ghosh> Actually ApproxFun seems pretty well-featured.
JeremyB
@jrmbr

Hey! I started using DifferentialEquations.jl recently and I have some small questions: I am trying to solve a simple ODE (a ball in a particular potential), but the potential I need to use is given to me as a .npy file. My workaround so far has been to interpolate the given array and compute the gradient in the derivative function.

using NPZ, Interpolations, DifferentialEquations

const tstart = 0.0
const tend = 1e-5
const tstep = 5e-8

trap = npzread("somefile.npy")
x = npzread("somefile2.npy")
z = npzread("somefile3.npy")

# linear interpolation of the tabulated potential
interpolated_potential = LinearInterpolation((x, z), trap, extrapolation_bc=0)

# state layout: u = [p₁, p₂, q₁, q₂]
# (state renamed from z to u to avoid shadowing the grid vector z above)
function evolve(du, u, p, t)
    p₁, p₂, q₁, q₂ = u[1], u[2], u[3], u[4]

    dp₁, dp₂ = -Interpolations.gradient(interpolated_potential, q₁, q₂)
    dq₁ = p₁
    dq₂ = p₂
    du .= [dp₁, dp₂, dq₁, dq₂]
    return nothing
end

probl_func = ODEFunction(evolve, syms=[:vx, :vy, :x, :y])

# terminate once the ball leaves the interpolated region
function condition(u, t, integrator)
    u[3] >= x[end] || u[3] <= x[1] || u[4] >= z[end] || u[4] <= z[1]
end

function affect!(integrator)
    terminate!(integrator)
end

cb = DiscreteCallback(condition, affect!)

u₀ = [randvelocity()..., randpos()...]
probl = ODEProblem(probl_func, u₀, (tstart, tend))
sol = solve(probl, Vern9(), dt=tstep, abstol=1e-6, reltol=1e-6, adaptive=false, callback=cb)

However I run into some issues: when the ball leaves the interpolated region, an error is thrown. I tried to catch it with the callback, but for some initial conditions it still crashes. I also seem to get a lot of allocations from the call to gradient in the derivative function...

Is there a better way to deal with this problem ?
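One pattern worth noting: a DiscreteCallback is only checked after a completed step, so the solver can still evaluate the derivative (and thus the interpolant) outside the grid mid-step before the check ever runs. A ContinuousCallback whose condition crosses zero at the domain boundary lets the integrator root-find the exit time instead. A sketch with hypothetical bounds and dynamics standing in for the interpolated potential:

```julia
using OrdinaryDiffEq

# Hypothetical domain bounds standing in for x[1], x[end], z[1], z[end].
const xlo, xhi, zlo, zhi = -1.0, 1.0, -1.0, 1.0

# Signed distance to the nearest boundary: positive inside, zero at the edge,
# so the integrator root-finds the crossing instead of stepping past it.
function condition(u, t, integrator)
    min(u[3] - xlo, xhi - u[3], u[4] - zlo, zhi - u[4])
end

affect!(integrator) = terminate!(integrator)
cb = ContinuousCallback(condition, affect!)

# Hypothetical free-flight dynamics: constant momenta, dq = p.
f(du, u, p, t) = (du .= (0.0, 0.0, u[1], u[2]))

u₀ = [1.0, 0.5, 0.0, 0.0]               # q₁ reaches xhi = 1 at t ≈ 1
prob = ODEProblem(f, u₀, (0.0, 10.0))
sol = solve(prob, Vern9(), callback = cb)
```

Here the solve terminates right at the boundary crossing rather than erroring inside the forbidden region.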