
[slack] <pure_interpeter> abstractingagent, do you want to see the two stage search no multiple shooting code or is that uninteresting for you?

Right now my differential equation is misbehaving lol

[slack] <pure_interpeter> https://pastebin.com/u9ZW1S6R My code. It's not much but it was good enough for my purposes.

I think there is a bug somewhere, trying to run it down before I bother anyone with it

[slack] <chrisrackauckas> global iters

[slack] <chrisrackauckas> iters +=1

[slack] <chrisrackauckas> if iters % 50

[slack] <pure_interpeter> Yeah, that sucks.

[slack] <chrisrackauckas> make a callable type with an iters field?

[slack] <chrisrackauckas> A let block?

[slack] <chrisrackauckas> it's all the same thing.

[slack] <pure_interpeter> It sucks from a code cleanness standpoint

[slack] <pure_interpeter> A Lisp dialect I like has this "function" (it would be implemented as a macro in Julia): https://files.slack.com/files-pri/T68168MUP-F01BE8SLU2D/download/bildschirmfoto_vom_2020-09-20_23-31-57.png
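For reference, a minimal sketch of the two counting patterns being discussed, assuming the callback has the usual `(p, l) -> Bool` shape used by sciml_train (the names `IterCounter` and `cb` are illustrative, not from this thread):

```
# Pattern 1: a global counter, as in the `global iters` lines above
iters = 0
function cb_global(p, l)
    global iters += 1
    if iters % 50 == 0      # note the `== 0`; a bare `iters % 50` is an Int, not a Bool
        println("iteration $iters, loss = $l")
    end
    return false            # returning true would halt the optimization
end

# Pattern 2: a callable struct that carries its own iteration count
mutable struct IterCounter
    iters::Int
end
function (c::IterCounter)(p, l)
    c.iters += 1
    c.iters % 50 == 0 && println("iteration $(c.iters), loss = $l")
    return false
end
cb = IterCounter(0)   # then pass `cb = cb` to sciml_train
```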

[slack] <chrisrackauckas> It's mixing a ton of ideas and I'm not sure what I'm looking at. Is it the ensemble that's the problem? Or the callback?

[slack] <chrisrackauckas> How did it get there?

[slack] <chrisrackauckas> The simplified version is fine: SciML/DiffEqSensitivity.jl#247

[slack] <chrisrackauckas> (in iip)

[slack] <chrisrackauckas> ahh okay

[slack] <chrisrackauckas> open an issue on that. Thanks for reducing it.

Mapping updates through the sciml_train function to my callback and loss via a loop proved to be tedious and buggy

Awesome, I'm on it - btw, I think there is a typo in the mini-batching implementation found at this link: https://diffeqflux.sciml.ai/dev/examples/minibatch/

In the step-by-step walkthrough, the NN is created under the name "ann":

```
ann = FastChain(FastDense(1,8,tanh), FastDense(8,1,tanh))
pp = initial_params(ann)
prob = ODEProblem{false}(dudt_, u0, tspan, pp)
function dudt_(u,p,t)
    ann(u, p) .* u
end
```

but then the loss function calls the NN using a different name, "n_ode", which is not used elsewhere in the code:

```
function predict_n_ode(p)
    n_ode(u0,p)
end
function loss_n_ode(p, start, k)
    pred = predict_n_ode(p)
    loss = sum(abs2, ode_data[:,start:start+k] .- pred[:,start:start+k])
    loss, pred
end
```
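For clarity, a naming-consistent version of that snippet might look like the following (a sketch only; `t` and `ode_data` are assumed to come from the same tutorial, and this is not necessarily the intended docs fix):

```
# hypothetical consistent version: predict by solving the `prob` defined above
function predict_ode(p)
    Array(solve(prob, Tsit5(), p = p, saveat = t))
end
function loss_ode(p, start, k)
    pred = predict_ode(p)
    loss = sum(abs2, ode_data[:, start:start+k] .- pred[:, start:start+k])
    loss, pred
end
```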

[slack] <chrisrackauckas> Hmm I don't know those. Sounds fun. Reference?

[slack] <chrisrackauckas> The answer is almost certainly, yes.

[slack] <stillyslalom> https://www.sciencedirect.com/science/article/pii/002199919290324R

[slack] <chrisrackauckas> that looks fantastic and a perfect fit.

[slack] <chrisrackauckas> @stillyslalom start by opening an issue with the reference so it's in the system

@ChrisRackauckas So I started writing the loss and predict functions for mini-batching over multiple overlapping trajectories. I wanted to start there before doing any sampling of trajectories (I think DataDrivenDiffEq has a random burst sampling function)

The basic idea is that a series of overlapping, shorter bursts of trajectories is sampled from one larger time span, and then a loop is written where the corresponding time span and IC are plugged into the loss through sciml_train, which remakes the prob in the predict function and then solves it to get values for the loss to train over

```
function predict(p, time_batch, u0)
    # remake the problem with the batch's parameters, IC, and time span
    tmp_prob = remake(prob, p = p, u0 = u0, tspan = (time_batch[1], time_batch[end]))
    Array(solve(tmp_prob, Tsit5(), saveat = time_batch))   # solve the remade problem, not `prob`
end
function loss(p, state_batch, time_batch)
    u0 = state_batch[:, 1]   # assumes columns of `state_batch` are time points
    pred = predict(p, time_batch, u0)
    sum(abs2, state_batch .- pred)
end
```

Is this the wrong way to go about it? I was wondering if there was a cleaner way to plug things into the loss through sciml_train, because of how the loss is treated there

This was a super naive, off-the-top-of-my-head overall loop, a sliding vector approach, kind of like the growing iterating boundary method

```
function train_trajectories(state::Matrix{Float64}, t::Vector{Float64}, pinit::Vector{Float32})
    neural_params = Vector{Float32}([])
    for i in 1:size(state, 1)
        if i + 20 > length(t)
            break
        elseif i < 2
            # first window: start from the user-supplied initial parameters
            uₒ = [state[i, j] for j in 1:size(state, 2)]
            tspans = (t[i], t[i+20])
            res = DiffEqFlux.sciml_train(p -> loss2(p; tsteps = tsteps[tspans[1] .<= tsteps .<= tspans[2]], u0 = uₒ),
                pinit, ADAM(0.005), maxiters = 300, cb = neuralode_callback1)
            neural_params = res.minimizer
        else
            # later windows: warm-start from the previous window's result
            uₒ = [state[i, j] for j in 1:size(state, 2)]
            tspans = (t[i], t[i+20])
            res2 = DiffEqFlux.sciml_train(p -> loss2(p; tsteps = tsteps[tspans[1] .<= tsteps .<= tspans[2]], u0 = uₒ),
                neural_params, ADAM(0.005), maxiters = 300, cb = neuralode_callback1)
            neural_params = res2.minimizer
        end
    end
    return neural_params
end
```
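For completeness, a hypothetical invocation of the loop above (the names `ode_data`, `tsteps`, and `ann` are assumptions from the earlier snippets and may not match the actual experiment):

```
# `state` is passed as time × states here, matching how it is indexed inside the loop
pinit = initial_params(ann)
trained_params = train_trajectories(Matrix{Float64}(ode_data'), collect(tsteps), pinit)
```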

I think I copied the wrong experiment for the loss and predict above, but the idea is the same