[slack] <Adedotun Agbemuko> #diffeq-bridged #sciml
This question is about the DifferentialEquations.jl package.
To give proper context for my problem, I have a system of equations I want to solve in time, of the form
dx/dt = Ax + Bu
where:
x = [x(1), x(2), ...x(24)] # state vector
u = [u(1), u(2), ...u(24)] # control reference input
I am attempting to implement a pair of hybrid callbacks. By this I mean a discrete callback (e.g. to make a step change in one or more of the inputs u at arbitrary but known times) and a continuous callback for a continuous check of certain states x(13) to x(24).
In principle, the solution values of x(13) to x(24) must be saturated if they exceed a limit on the low or high side.
For example, if any of x(13):x(24) at any time is > 1.2, it is saturated to 1.2; if it is less than -1.2, it is saturated at -1.2. This saturation is important to the credibility of the solutions: it reflects equipment rating/tolerance and actuator capability.
The discrete callback alone worked perfectly, as expected. However, when I try to combine it with the continuous callback, I get too many errors to understand or make sense of. What am I doing wrong? Below is a snippet of what I am trying to do:
```
function time2step_disco(x, t, integrator)
    t == 10.0
end
function step_change_disco!(integrator)
    matrix_size = size(integrator.p)
    integrator.p[6, matrix_size[2]] -= integrator.p[6, matrix_size[2]]
end
```
basically driving input 6 to zero from its original value by subtracting the original value from itself.
```
function limit_check(I_limits, x, t, integrator)
    I_limits[1] = x[13:24] > 1.2
    I_limits[2] = x[13:24] < -1.2
end
function current_limiting!(integrator, idx)
    if idx == 1
        integrator.u[13:24] = 1.2
    elseif idx == 2
        integrator.u[13:24] = -1.2
    end
end
```
Then, for the call to the solver:
```
nodal_disconxn = DiscreteCallback(time2step_disco, step_change_disco!)
current_saturation = VectorContinuousCallback(limit_check, current_limiting!, 2) # 2 implies there are only 2 checks made, i.e. the high and low sides of the saturation curve
event_set = CallbackSet(nodal_disconxn, current_saturation)
```
so event_set is passed on to the ODEProblem.
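For reference, a `VectorContinuousCallback` condition is expected to write continuous zero-crossing values into its `out` argument (the event fires when an entry crosses zero), not Booleans, and assigning into a slice like `integrator.u[13:24]` needs a broadcasted `.=`. A minimal sketch of the saturation callback in that form (the 1.2 limit and the 13:24 index range come from the question above; everything else is illustrative, not the poster's exact model):

```julia
using DifferentialEquations  # provides VectorContinuousCallback

function limit_check(out, x, t, integrator)
    # out[k] must cross zero at the event, rather than hold a Boolean
    out[1] = maximum(x[13:24]) - 1.2   # hits zero when some state reaches +1.2
    out[2] = minimum(x[13:24]) + 1.2   # hits zero when some state reaches -1.2
end

function current_limiting!(integrator, idx)
    # Broadcast (.=) is required when assigning into a slice;
    # clamp handles both the high and low sides, so idx is unused here.
    integrator.u[13:24] .= clamp.(integrator.u[13:24], -1.2, 1.2)
end

current_saturation = VectorContinuousCallback(limit_check, current_limiting!, 2)
```

This keeps the two-condition structure of the original snippet while giving the root-finder smooth scalar conditions to track.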
[slack] <Dan Burns> I'd like to solve a simple ODE with units from Unitful, but I get an error related to initialization. MWE:
```
using DifferentialEquations, Unitful
tspan = (0.0u"s", 10.0u"s")
x0 = [0.0u"m", 0.0u"m/s"]
F(t) = 1
A = [0u"s^-1" 1; 0u"s^-2" 0u"s^-1"]
B = [0u"m/s"; 1u"m/s^2"]
di(x,u,t) = A*x + B*u(t)
prob = ODEProblem(di, x0, tspan, F)
sol = solve(prob, Tsit5())
```
Which yields:
```
ERROR: ArgumentError: zero(Quantity{Float64,D,U} where U where D) not defined.
Stacktrace:
 [1] zero(::Type{Quantity{Float64,D,U} where U where D}) at /Users/dan/.julia/packages/Unitful/m6pR3/src/quantities.jl:374
```
This is the same error one gets when entering `zero(x0)`, which makes me suspect initialization. Any suggestions? Thanks in advance!
Is `x0` an ArrayPartition? I think that the problem is that you don't have a concrete element type for the initial conditions, since you are using a plain array whose elements have different types (because they have different units).
`x0 = ArrayPartition([0.0u"m", 0.0u"m/s"])` gives
```
ERROR: LoadError: MethodError: Cannot `convert` an object of type Array{Quantity{Float64,D,U} where U where D,1} to an object of type ArrayPartition{Quantity{Float64,D,U} where U where D,Tuple{Array{Quantity{Float64,D,U} where U where D,1}}}
```
but `x0 = ArrayPartition(0.0u"m", 0.0u"m/s")` works, and https://diffeq.sciml.ai/stable/features/diffeq_arrays/#ArrayPartitions indicates that this is the correct use case, so thanks for the pointer!
Another option is `x0 = ComponentArray(pos=0.0u"m", vel=0.0u"m/s")`. See https://jonniedie.github.io/ComponentArrays.jl/stable/examples/adaptive_control/ for an example of using ComponentArrays for more deeply nested state-space problems. That example doesn't use Unitful.jl, but there's no reason you wouldn't be able to do this with units as well.
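Putting the thread's fixes together, a sketch of the corrected MWE (an untested reconstruction: the `*` operators, which Slack's formatting likely swallowed in the original message, are restored, and the initial condition uses the ArrayPartition form the thread confirmed works):

```julia
using DifferentialEquations, Unitful
using RecursiveArrayTools: ArrayPartition  # also re-exported by DifferentialEquations

tspan = (0.0u"s", 10.0u"s")
# ArrayPartition stores each component with its own concrete unit type
x0 = ArrayPartition(0.0u"m", 0.0u"m/s")
F(t) = 1
A = [0u"s^-1" 1; 0u"s^-2" 0u"s^-1"]
B = [0u"m/s"; 1u"m/s^2"]
di(x, p, t) = A * x + B * p(t)   # p is the parameter slot, here the forcing F
prob = ODEProblem(di, x0, tspan, F)
sol = solve(prob, Tsit5())
```

The key point is that each element of `x0` keeps a concrete `Quantity` type, so `zero` is well-defined for the state.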
[slack] <timkim> For this particular code, I'm finding that changing to FastDense and sciml_train throws an error.
```
using DiffEqFlux, DifferentialEquations, Flux, Optim, Plots, DiffEqSensitivity
u0 = Float32[2.; 0.]
datasize = 100
tspan = (0.0f0,10.5f0)
dosetimes = [1.0,2.0,4.0,8.0]
function affect!(integrator)
integrator.u = integrator.u.+1
end
cb = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function trueODEfunc(du,u,p,t)
du .= -u
end
t = range(tspan[1],tspan[2],length=datasize)
prob = ODEProblem(trueODEfunc,u0,tspan)
ode_data = Array(solve(prob,Tsit5(),callback=cb,saveat=t))
dudt2 = Chain(Dense(2,50,tanh),
Dense(50,2))
θ,re = Flux.destructure(dudt2) # use this p as the initial condition!
function dudt(du,u,p,t)
du[1:2] .= -u[1:2]
du[3:end] .= re(p)(u[1:2])
end
z0 = Float32[u0;u0]
prob = ODEProblem(dudt,z0,tspan)
affect!(integrator) = integrator.u[1:2] .= integrator.u[3:end]
cb = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function predict_n_ode()
_prob = remake(prob,p=θ)
Array(solve(_prob,Tsit5(),u0=z0,p=θ,callback=cb,saveat=t,sensealg=ReverseDiffAdjoint()))[1:2,:]
end
function loss_n_ode()
pred = predict_n_ode()
loss = sum(abs2,ode_data .- pred)
loss
end
loss_n_ode() # n_ode.p stores the initial parameters of the neural ODE
cba = function (;doplot=false) #callback function to observe training
pred = predict_n_ode()
display(sum(abs2,ode_data .- pred))
pl = scatter(t,ode_data[1,:],label="data")
scatter!(pl,t,pred[1,:],label="prediction")
display(plot(pl))
return false
end
cba()
ps = Flux.params(θ)
data = Iterators.repeated((), 70)
Flux.train!(loss_n_ode, ps, data, ADAM(0.05), cb = cba)
```
Changing to:
```
using DiffEqFlux, DifferentialEquations, Flux, Optim, Plots, DiffEqSensitivity
u0 = Float32[2.; 0.]
datasize = 100
tspan = (0.0f0,10.5f0)
dosetimes = [1.0,2.0,4.0,8.0]
function affect!(integrator)
integrator.u = integrator.u.+1
end
cb = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function trueODEfunc(du,u,p,t)
du .= -u
end
t = range(tspan[1],tspan[2],length=datasize)
prob = ODEProblem(trueODEfunc,u0,tspan)
ode_data = Array(solve(prob,Tsit5(),callback=cb,saveat=t))
dudt2 = FastChain(FastDense(2,50,tanh),
FastDense(50,2))
θ = initial_params(dudt2)
function dudt(du,u,p,t)
du[1:2] .= -u[1:2]
du[3:end] .= dudt2(u[1:2], p)
end
z0 = Float32[u0;u0]
prob = ODEProblem(dudt,z0,tspan)
affect!(integrator) = integrator.u[1:2] .= integrator.u[3:end]
cb = PresetTimeCallback(dosetimes,affect!,save_positions=(false,false))
function predict_n_ode(θ)
_prob = remake(prob,p=θ)
Array(solve(_prob,Tsit5(),u0=z0,p=θ,callback=cb,saveat=t,sensealg=ReverseDiffAdjoint()))[1:2,:]
end
function loss_n_ode(θ)
pred = predict_n_ode(θ)
loss = sum(abs2,ode_data .- pred)
loss
end
loss_n_ode(θ) # n_ode.p stores the initial parameters of the neural ODE
cba = function (;doplot=false) #callback function to observe training
pred = predict_n_ode()
display(sum(abs2,ode_data .- pred))
pl = scatter(t,ode_data[1,:],label="data")
scatter!(pl,t,pred[1,:],label="prediction")
display(plot(pl))
return false
end
cba()
res = DiffEqFlux.sciml_train(loss_n_ode, θ, ADAM(0.05), cb = cba, maxiters = 70)
```
Is this a bug with sciml_train? Also, what prevents us from using a `sensealg` such as InterpolatingAdjoint() or BacksolveAdjoint() for this particular problem? I get
```
MethodError: no method matching (::var"#15#17")(::Array{Float32,1}, ::Float32)
```
when I execute the latter snippet, while the first snippet runs fine.
[slack] <vaibhavdixit02> The issue is in the callback definition. Changing it to this should make it work fine
```
cba = function (θ, pred; doplot=false) #callback function to observe training
pred = predict_n_ode(θ)
display(sum(abs2,ode_data .- pred))
# plot current prediction against data
pl = scatter(t,ode_data[1,:],label="data")
scatter!(pl,t,pred[1,:],label="prediction")
display(plot(pl))
return false
end
```
[slack] <timkim> I see, the issue was it should have been:
```
cba = function (θ, loss, pred; doplot=false) #callback function to observe training
display(sum(abs2,ode_data .- pred))
# plot current prediction against data
pl = scatter(t,ode_data[1,:],label="data")
scatter!(pl,t,pred[1,:],label="prediction")
display(plot(pl))
return false
end
```
Thank you!
I've used `DiffEqCallbacks` on small systems; is this also effective on large systems? The other solution I'm considering is to implement the driving force in the system model f(du, u, p, t), but then I'd have to implement an interpolation scheme or use a solver with fixed time steps. Any recommendations?
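One way to keep an adaptive solver is to make the driving force a continuous function of t by interpolating the known samples inside f(du, u, p, t). A hand-rolled linear-interpolation sketch (all names and sample data here are illustrative; DataInterpolations.jl provides ready-made interpolants for the same purpose):

```julia
# Linear interpolation over known sample times; clamps outside the range.
function lininterp(ts, vals, t)
    t <= ts[1]   && return vals[1]
    t >= ts[end] && return vals[end]
    i = searchsortedlast(ts, t)          # index of the sample at or before t
    w = (t - ts[i]) / (ts[i+1] - ts[i])  # fractional position in the interval
    return (1 - w) * vals[i] + w * vals[i+1]
end

# Driving-force samples at known times (illustrative data)
ts   = [0.0, 1.0, 2.0, 3.0]
vals = [0.0, 1.0, 0.5, 0.0]

# The forcing is evaluated continuously inside the RHS, so adaptive
# steppers can land between sample points without special handling.
function f!(du, x, p, t)
    A, B = p
    u = lininterp(ts, vals, t)
    du .= A * x .+ B .* u
end
```

Because the forcing is smooth between samples, no fixed time grid is needed; if the forcing has discontinuities, adding the sample times as `tstops` keeps the integrator accurate across them.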
```
dudt2 = Chain(DenseAbs(1,2),
              DenseExp(2,1))
p,re = Flux.destructure(dudt2) # use this p as the initial condition!
# need to restructure for backprop!
ps = params(dudt2)
delete!(ps, dudt2[2].W)
delete!(ps, dudt2[2].b)
ps = collect(Iterators.flatten(ps))
dudt(u,p,t) = re(ps)(u)
prob = ODEProblem(dudt,u0,tspan)
cb1()
loss_n_ode()
predict_n_ode()
#=
train_flux_model(ode_data, tsteps, 5, 500)
final_tuning_flux(ode_data, 2000)
=#
Flux.params(dudt2)
data = Iterators.repeated((), 10)
Flux.train!(loss_n_ode, Flux.params(u0,ps), data,
            Flux.Optimise.AMSGrad(0.01), cb = cb1)
```