BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> Hmm I don't know those. Sounds fun. Reference?
[slack] <chrisrackauckas> The answer is almost certainly, yes.
[slack] <chrisrackauckas> that looks fantastic and a perfect fit.
[slack] <chrisrackauckas> @stillyslalom start by opening an issue with the reference so it's in the system
BridgingBot
@GitterIRCbot
[slack] <stillyslalom> SciML/DiffEqOperators.jl#278
BridgingBot
@GitterIRCbot

[slack] <Adedotun Agbemuko> #diffeq-bridged #sciml
This issue is based on the DifferentialEquations package.

To give proper context for my problem, I have a system of equations I want to solve in time, of the form
dx = A*x + B*u
where:
x = [x(1), x(2), ... x(24)] # state vector
u = [u(1), u(2), ... u(24)] # control reference input

I am attempting to implement a couple of hybrid callbacks. By this I mean a discrete callback (e.g. to make a step change in one or more of the inputs u at arbitrary, but known, times) and a continuous callback for a continuous check of certain states x(13) to x(24).

In principle, the solution values of x(13) to x(24) must be saturated if they exceed a value on the low or high side. For example, if any of x(13):x(24) at any time is > 1.2, it is saturated to 1.2; if it is less than -1.2, it is saturated at -1.2. This saturation is important to the credibility of the solutions: it reflects equipment rating/tolerance and actuator capability.

The discrete callback alone worked perfectly as expected. However, when I try to combine it with the continuous callback, I get too many errors to understand or make sense of. What am I doing wrong? Below is a snippet of what I am trying to do.

Below is the discrete callback:

```
function time2step_disco(x,t,integrator)
    t == 10.0
end

function step_change_disco!(integrator)
    matrix_size = size(integrator.p)
    integrator.p[6, matrix_size[2]] -= integrator.p[6, matrix_size[2]]
end
```

basically driving input 6 to zero from its original value by subtracting the original value from itself.

Below is the continuous callback:

```
function limit_check(I_limits,x,t,integrator)
    I_limits[1] = x[13:24] > 1.2
    I_limits[2] = x[13:24] < -1.2
end

function current_limiting!(integrator,idx)
    if idx == 1
        integrator.u[13:24] = 1.2
    elseif idx == 2
        integrator.u[13:24] = -1.2
    end
end
```

Then, in the call to the solver:

```
# Events
nodal_disconxn = DiscreteCallback(time2step_disco, step_change_disco!)
current_saturation = VectorContinuousCallback(limit_check, current_limiting!, 2) # 2 implies there are only 2 checks made, e.g. the high and low sides of the saturation curve
event_set = CallbackSet(nodal_disconxn, current_saturation)
```

so event_set is passed on to the ODEProblem.

abstractingagent
@abstractingagent
Nvm, scratch what I posted. I think all I have to do is create a "dataloader" that, instead of creating k batches, creates multiple overlapping trajectories, and then just use the ncycle mini-batch implementation.
Just to clarify: the data argument in sciml_train is plugged into the loss on the back end, right? Assuming in x, y order, after params.
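For what it's worth, a small sketch of how the data argument reaches the loss, following the DiffEqFlux mini-batch pattern where each element of data is splatted into the loss after the parameters (the toy loss and data here are illustrative, not from the thread):

```
using DiffEqFlux, Flux, IterTools

xs, ys = rand(Float32, 2, 100), rand(Float32, 2, 100)
batches = [(xs[:, i:i+9], ys[:, i:i+9]) for i in 1:10:91]
data = IterTools.ncycle(batches, 5)        # 5 epochs over the batches

loss(θ, x, y) = sum(abs2, y .- θ .* x)     # toy loss; receives (x, y) after θ
res = DiffEqFlux.sciml_train(loss, rand(Float32, 2), ADAM(0.05), data)
```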
abstractingagent
@abstractingagent
What does take(_data, maxiters) do in the sciml_train function? I could only find take! as a native Julia function when I did ?take in the REPL.
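For reference, take here is presumably Base.Iterators.take, which is not exported at the top level (hence ?take only surfacing take!); it truncates an iterator to at most n elements, which would cap the data iterator at maxiters. A minimal illustration:

```
using Base.Iterators: take

data = Iterators.repeated((), 100)
length(collect(take(data, 3)))  # == 3: iteration stops after three elements
```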
abstractingagent
@abstractingagent
So it seems mini-batching is not effective at training a neural ODE if you use the ncycle implementation. Sampling a trajectory, training on it continuously until it fits, and then widening the tspan training interval was far more effective.
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> interesting to note.
[slack] <chrisrackauckas> yeah sorry I'm behind on responses: crazy week going on here.
[slack] <chrisrackauckas> @Adedotun Agbemuko continuous callbacks need to define the condition as a rootfinding function
[slack] <chrisrackauckas> you're defining a boolean, which isn't a rootfinding condition
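A minimal sketch of that fix for the saturation example above, reusing the same indices and limits from the question: each out[i] must be a continuous function of the state that crosses zero at the event, not a Bool.

```
# rootfinding-style condition for VectorContinuousCallback
function limit_check(out, x, t, integrator)
    out[1] = maximum(x[13:24]) - 1.2   # zero when any state reaches +1.2
    out[2] = minimum(x[13:24]) + 1.2   # zero when any state reaches -1.2
end

current_saturation = VectorContinuousCallback(limit_check, current_limiting!, 2)
```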
abstractingagent
@abstractingagent
No worries btw, I always appreciate your input at your own pace
abstractingagent
@abstractingagent
If you're using the ODEProblem{false} configuration and can't write the du equations explicitly, do you just write each equation line by line?
abstractingagent
@abstractingagent
Despite increased success from incrementally increasing the tspan the NODE is trained on, this method begins to fail for longer trajectories. Any ideas on how to overcome this?
abstractingagent
@abstractingagent
Turns out that mixing mini-batching and the incremental increase of the tspan together solves this problem
Is this idea already implemented? https://arxiv.org/pdf/2009.09457.pdf
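As a rough sketch, the growing-horizon idea described above reads something like the following, assuming a prob/ode_data/t/θ setup as in the DiffEqFlux tutorials (all names here are illustrative assumptions, not from the thread):

```
# fit a short tspan first, then restart the optimizer from the
# previous optimum on progressively longer spans
for T in (1.0f0, 2.5f0, 5.0f0, 10.5f0)
    idx = t .<= T
    loss(θ) = sum(abs2, ode_data[:, idx] .-
                  Array(solve(remake(prob, p=θ, tspan=(0f0, T)),
                              Tsit5(), saveat=t[idx])))
    res = DiffEqFlux.sciml_train(loss, θ, ADAM(0.05), maxiters=200)
    global θ = res.minimizer
end
```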
abstractingagent
@abstractingagent
@ChrisRackauckas if I copy and paste the code here https://diffeqflux.sciml.ai/dev/GPUs/ for running a NODE on the GPU and run it with @btime, I get 62 s, but when I remove the |> gpu for the initial conditions and NN parameters, @btime gives me 4.32 s?
Christopher Rackauckas
@ChrisRackauckas
Yeah, that example is too small to make effective use of GPUs
The bridge seems broken
abstractingagent
@abstractingagent
How does one add a bias when using FastChain and FastDense?
abstractingagent
@abstractingagent
Anything I should do to fix the bridge on my end?
BridgingBot
@GitterIRCbot

[slack] <Dan Burns> I’d like to solve a simple ODE with units from Unitful, but I get an error related to initialization. MWE:
```
using DifferentialEquations, Unitful

tspan = (0.0u"s", 10.0u"s")
x0 = [0.0u"m", 0.0u"m/s"]
F(t) = 1

# double integrator in state-space form
A = [0u"s^-1" 1; 0u"s^-2" 0u"s^-1"]
B = [0u"m/s"; 1u"m/s^2"]
di(x, u, t) = A*x + B*u(t)

prob = ODEProblem(di, x0, tspan, F)
sol = solve(prob, Tsit5())
```
Which yields:
```
ERROR: ArgumentError: zero(Quantity{Float64,D,U} where U where D) not defined.
Stacktrace:
[1] zero(::Type{Quantity{Float64,D,U} where U where D}) at /Users/dan/.julia/packages/Unitful/m6pR3/src/quantities.jl:374
```
This is the same error one gets when entering `zero(x0)`, which makes me suspect initialization. Any suggestions? Thanks in advance!

BridgingBot
@GitterIRCbot
[slack] <chen.tianc> ok, I guess I will stay with the lower version for now.
BridgingBot
@GitterIRCbot
[slack] <AlCap23> If you need the newer features you can also switch environments.
BridgingBot
@GitterIRCbot
[slack] <Dan Burns> Perhaps the closest example I’ve found is here:
https://github.com/SciML/SciMLTutorials.jl/blob/master/markdown/type_handling/03-unitful.md
but it does not cover vector-valued initial conditions, which may be the problem?
BridgingBot
@GitterIRCbot
[slack] <SebastianM-C> Could you try making x0 an ArrayPartition? I think the problem is that you don't have a concrete element type for the initial conditions, since you are using a plain array whose elements have different types (because they have different units).
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> @Dan Burns you might want to use a ComponentArray here. @Jonnie might have an example lying around.
BridgingBot
@GitterIRCbot
[slack] <Dan Burns> Thanks for the suggestion, @SebastianM-C, I think you’re on the right track. However, simply casting x0 as an ArrayPartition doesn’t seem to work, or at least gives a different error. I.e., when I use the MWE code from before with the change:
```
x0 = ArrayPartition([0.0u"m", 0.0u"m/s"])
```
I get the error:
```
ERROR: LoadError: MethodError: Cannot `convert` an object of type Array{Quantity{Float64,D,U} where U where D,1} to an object of type ArrayPartition{Quantity{Float64,D,U} where U where D,Tuple{Array{Quantity{Float64,D,U} where U where D,1}}}
```
BridgingBot
@GitterIRCbot
[slack] <Dan Burns> Similar unhappiness with x0 = ArrayPartition(0.0u"m", 0.0u"m/s"), but the docs at https://diffeq.sciml.ai/stable/features/diffeq_arrays/#ArrayPartitions indicate that this is the correct use case, so thanks for the pointer!
[slack] <Jonnie> @Dan Burns It works with ComponentArrays if you replace the initial conditions with x0 = ComponentArray(pos=0.0u"m", vel=0.0u"m/s"). https://jonniedie.github.io/ComponentArrays.jl/stable/examples/adaptive_control/ is an example of using ComponentArrays for more deeply nested state-space problems too. It doesn't use Unitful.jl in this example, but there's no reason you wouldn't be able to do this with units as well.
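Putting the suggestion together with the earlier MWE, a minimal sketch of the fix (assuming the MWE's definitions; the ComponentArray gives the mixed-unit state a concrete layout):

```
using DifferentialEquations, Unitful, ComponentArrays

tspan = (0.0u"s", 10.0u"s")
F(t) = 1

# double integrator in state-space form
A = [0u"s^-1" 1; 0u"s^-2" 0u"s^-1"]
B = [0u"m/s"; 1u"m/s^2"]
di(x, u, t) = A*x + B*u(t)

# named components instead of a plain Vector
x0 = ComponentArray(pos=0.0u"m", vel=0.0u"m/s")
prob = ODEProblem(di, x0, tspan, F)
sol = solve(prob, Tsit5())
```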
BridgingBot
@GitterIRCbot

[slack] <Dan Burns> Thanks a ton, @Jonnie! That works!

Also, thanks for the link. MRAC is cool stuff, and I wasn’t aware of that work being done in Julia. I’d love to chat about that and other controls topics with you at some point.

BridgingBot
@GitterIRCbot
[slack] <Jonnie> Yeah, definitely shoot me a message some time! Unfortunately I haven’t had much of a chance to do much MRAC stuff beyond that toy example, but I’d love to expand it a little more for some more interesting problems
BridgingBot
@GitterIRCbot
[slack] <Dan Burns> I definitely will! Thanks for this and for getting me unstuck with units and differential equations!
BridgingBot
@GitterIRCbot

[slack] <timkim> For this particular code, I'm finding that changing to FastDense and sciml_train throws an error.

```
using DiffEqFlux, DifferentialEquations, Flux, Optim, Plots, DiffEqSensitivity
u0 = Float32[2.; 0.]
datasize = 100
tspan = (0.0f0,10.5f0)
dosetimes = [1.0,2.0,4.0,8.0]
function affect!(integrator)
    integrator.u = integrator.u .+ 1
end
cb_ = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function trueODEfunc(du,u,p,t)
    du .= -u
end
t = range(tspan[1],tspan[2],length=datasize)
prob = ODEProblem(trueODEfunc,u0,tspan)
ode_data = Array(solve(prob,Tsit5(),callback=cb_,saveat=t))
dudt2 = Chain(Dense(2,50,tanh),
              Dense(50,2))
θ,re = Flux.destructure(dudt2) # use this p as the initial condition!
function dudt(du,u,p,t)
    du[1:2] .= -u[1:2]
    du[3:end] .= re(p)(u[1:2])
end
z0 = Float32[u0;u0]
prob = ODEProblem(dudt,z0,tspan)
affect!(integrator) = integrator.u[1:2] .= integrator.u[3:end]
cb = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function predict_n_ode()
    _prob = remake(prob,p=θ)
    Array(solve(_prob,Tsit5(),u0=z0,p=θ,callback=cb,saveat=t,sensealg=ReverseDiffAdjoint()))[1:2,:]
end
function loss_n_ode()
    pred = predict_n_ode()
    loss = sum(abs2,ode_data .- pred)
    loss
end
loss_n_ode() # n_ode.p stores the initial parameters of the neural ODE

cba = function (;doplot=false) # callback function to observe training
    pred = predict_n_ode()
    display(sum(abs2,ode_data .- pred))
    # plot current prediction against data
    pl = scatter(t,ode_data[1,:],label="data")
    scatter!(pl,t,pred[1,:],label="prediction")
    display(plot(pl))
    return false
end

# Display the ODE with the initial parameter values.
cba()

ps = Flux.params(θ)
data = Iterators.repeated((), 70)
Flux.train!(loss_n_ode, ps, data, ADAM(0.05), cb = cba)
```

Changing to:

```
using DiffEqFlux, DifferentialEquations, Flux, Optim, Plots, DiffEqSensitivity
u0 = Float32[2.; 0.]
datasize = 100
tspan = (0.0f0,10.5f0)
dosetimes = [1.0,2.0,4.0,8.0]
function affect!(integrator)
    integrator.u = integrator.u .+ 1
end
cb_ = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function trueODEfunc(du,u,p,t)
    du .= -u
end
t = range(tspan[1],tspan[2],length=datasize)
prob = ODEProblem(trueODEfunc,u0,tspan)
ode_data = Array(solve(prob,Tsit5(),callback=cb_,saveat=t))
dudt2 = FastChain(FastDense(2,50,tanh),
                  FastDense(50,2))
θ = initial_params(dudt2)

function dudt(du,u,p,t)
    du[1:2] .= -u[1:2]
    du[3:end] .= dudt2(u[1:2], p)
end
z0 = Float32[u0;u0]
prob = ODEProblem(dudt,z0,tspan)
affect!(integrator) = integrator.u[1:2] .= integrator.u[3:end]
cb = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function predict_n_ode(θ)
    _prob = remake(prob,p=θ)
    Array(solve(_prob,Tsit5(),u0=z0,p=θ,callback=cb,saveat=t,sensealg=ReverseDiffAdjoint()))[1:2,:]
end
function loss_n_ode(θ)
    pred = predict_n_ode(θ)
    loss = sum(abs2,ode_data .- pred)
    loss
end
loss_n_ode(θ) # n_ode.p stores the initial parameters of the neural ODE

cba = function (;doplot=false) # callback function to observe training
    pred = predict_n_ode()
    display(sum(abs2,ode_data .- pred))
    # plot current prediction against data
    pl = scatter(t,ode_data[1,:],label="data")
    scatter!(pl,t,pred[1,:],label="prediction")
    display(plot(pl))
    return false
end

# Display the ODE with the initial parameter values.
cba()

res = DiffEqFlux.sciml_train(loss_n_ode, θ, ADAM(0.05), cb = cba, maxiters = 70)
```

Is this a bug with `sciml_train`? Also, what prevents us from using a `sensealg` such as `InterpolatingAdjoint()` or `BacksolveAdjoint()` for this particular problem?

BridgingBot
@GitterIRCbot
[slack] <vaibhavdixit02> Would be helpful to see the error?
[slack] <timkim> I'm getting `MethodError: no method matching (::var"#15#17")(::Array{Float32,1}, ::Float32)` when I execute the latter snippet, while the first snippet runs fine.
BridgingBot
@GitterIRCbot

[slack] <vaibhavdixit02> The issue is in the callback definition. Changing it to this should make it work fine:

```
cba = function (θ, pred; doplot=false) # callback function to observe training
    pred = predict_n_ode(θ)
    display(sum(abs2,ode_data .- pred))
    # plot current prediction against data
    pl = scatter(t,ode_data[1,:],label="data")
    scatter!(pl,t,pred[1,:],label="prediction")
    display(plot(pl))
    return false
end
```
BridgingBot
@GitterIRCbot
[slack] <timkim> Ah thank you for catching that! But that still didn't solve the issue for some reason...
[slack] <timkim> Still getting the same error
BridgingBot
@GitterIRCbot
[slack] <vaibhavdixit02> It worked for me, try from a fresh julia session I guess
BridgingBot
@GitterIRCbot

[slack] <timkim> I see, the issue was that it should have been:

```
cba = function (θ, loss, pred; doplot=false) # callback function to observe training
    display(sum(abs2,ode_data .- pred))
    # plot current prediction against data
    pl = scatter(t,ode_data[1,:],label="data")
    scatter!(pl,t,pred[1,:],label="prediction")
    display(plot(pl))
    return false
end
```
Thank you!

BridgingBot
@GitterIRCbot
[slack] <Adam Gerlach> What is the current state of ApproxFun + DiffEq?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> if you build the constant-sized operators, it's standard
BridgingBot
@GitterIRCbot
[slack] <Simen Husøy> What approach is preferred for implementing a driving force in an ODE with >1000 states? The driving force is a vector of inputs sampled at even time intervals. I see that people use DiffEqCallbacks on small systems; is this also effective on large systems? The other solution I'm considering is to implement the driving force in the system model f(du, u, p, t), but then I have to implement either an interpolation scheme or use solvers with fixed time steps. Any recommendations?
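A minimal sketch of the second option (the sizes, data, and toy dynamics here are illustrative assumptions, not from the thread): precompute the sampled force matrix and linearly interpolate between the evenly spaced samples inside f, which keeps adaptive time stepping available.

```
using DifferentialEquations

dt = 0.1
ts = 0.0:dt:10.0
U  = randn(1000, length(ts))        # hypothetical sampled driving force, one row per state

# linear interpolation between the evenly spaced samples
function force(t)
    i = clamp(floor(Int, t / dt) + 1, 1, size(U, 2) - 1)
    w = (t - ts[i]) / dt
    @views (1 - w) .* U[:, i] .+ w .* U[:, i + 1]
end

function f!(du, u, p, t)
    du .= -u .+ force(t)            # toy dynamics plus the driving term
end

prob = ODEProblem(f!, zeros(1000), (0.0, 10.0))
sol = solve(prob, Tsit5())
```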
BridgingBot
@GitterIRCbot
[slack] <Simen Husøy> I will look into this. Thanks for the reply!
BridgingBot
@GitterIRCbot
[slack] <ranocha> An example for the wave equation can be found at https://github.com/ranocha/SummationByPartsOperators.jl/blob/master/notebooks/Wave_equation.ipynb