abstractingagent
@abstractingagent

On the section in the diffeqflux that talks about the strategies to avoid local minima, the heading says "Iterative Growing Of Fits to Reduce" - shouldn't it say Iterative Growing Of Fits to Reduce Local Minima?

link: https://diffeqflux.sciml.ai/dev/examples/local_minima/#Strategies-to-Avoid-Local-Minima-1

BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> oh yes hehe
abstractingagent
@abstractingagent
I am trying to build a case for engineers at work to use these tools for system identification under actuation, but I need to properly demonstrate on toy problems with increasing complexity that these networks do indeed capture the dynamics and don't just curve fit
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> play with the network size too
[slack] <chrisrackauckas> you just have to play with how and what's being trained
[slack] <chrisrackauckas> and decrease the learning rates if it's not very stable (which is the wobbling)
abstractingagent
@abstractingagent
Makes sense, a smaller learning rate means not overshooting, and it worked: the method fit the (0, 5) time span really well, but when I tried to make it extrapolate over a larger range (0, 100) it didn't do too well. Instead of plugging the controls directly into the neural network, I placed them outside and added them as they were in the original ODE, and it worked even better, extrapolating extremely well. Going to increase the complexity now, thank you
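The "controls outside the network" structure described above can be sketched roughly as follows. All names here are assumptions, and the network is replaced by a stand-in linear map so the snippet runs on its own; a real setup would use a trained Flux chain in its place.

```julia
using DifferentialEquations

# Stand-in for the trained neural network (illustrative only).
nn(u, p) = p .* u
# Assumed actuation signal.
u_ctrl(t) = sin(t)

function dudt!(du, u, p, t)
    # The control is added outside the network, exactly as it appears
    # in the original ODE, instead of being fed in as a network input.
    du[1] = nn(u, p)[1] + u_ctrl(t)
end

prob = ODEProblem(dudt!, [1.0], (0.0, 5.0), [-0.5])
sol = solve(prob, Tsit5())
```

The point of the structure is that the network only has to learn the unknown dynamics, not the (already known) way the actuation enters the system, which is plausibly why it extrapolates better.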
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> abstractingagent, multiple shooting might also be interesting, in order to flatten local minima
BridgingBot
@GitterIRCbot

[slack] <timkim> I found that
```julia
using DifferentialEquations, DiffEqSensitivity, Flux, Test

function _affect!(integrator, k)
    integrator.u = integrator.u .+ 1
end

cbs = []

N = 10

for i = 1:N
    affect! = (integrator) -> _affect!(integrator, i)
    cb = PresetTimeCallback(Float32[0.1, 1.2], affect!, save_positions = (false, false))
    push!(cbs, cb)
end

tspan = (0.f0, 3.f0)
teval = sort(tspan[2] * rand(23) .+ tspan[1])

pa = [1.0]
u0 = [3.0]

prob = ODEProblem((u, p, t) -> 1.01u .* p, u0, (0.0, 3.0), pa)

function prob_func(prob, i, repeat)
    remake(prob, callback = cbs[i])
end

function model2()
    ensemble_prob = EnsembleProblem(prob, prob_func = prob_func)
    Array(solve(ensemble_prob, Tsit5(), EnsembleThreads(), saveat = teval,
                sensealg = ReverseDiffAdjoint(), trajectories = N))
end

loss() = sum(model2())

pa = [1.0]
u0 = [3.0]
opt = ADAM(0.1)
println("Starting to train")
l1 = loss()

cb = function ()
end

data = Iterators.repeated((), 10)

Flux.@epochs 10 Flux.train!(loss, params([pa, u0]), data, opt; cb = cb)
l2 = loss()
@test 10l2 < l1
```
This throws a `MethodError: Cannot convert an object of type ReverseDiff.TrackedReal{Float64,Float64,ReverseDiff.TrackedArray{Float64,Float64,1,Array{Float64,1},Array{Float64,1}}} to an object of type ReverseDiff.TrackedArray{Float64,Float64,1,Array{Float64,1},Array{Float64,1}}` error. When I change `sensealg` to the default, the error didn't occur. Is this a bug? What would be a way to fix this?

abstractingagent
@abstractingagent
@pure_interpeter yes I was thinking something along the same lines, trying it today, will report back to the lobby if successful
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> I had much success overcoming local minima in the Lotka-Volterra parameter estimation using multiple shooting
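For reference, the multiple-shooting idea can be sketched on a toy exponential-growth problem. Everything below is illustrative: synthetic data, an assumed segment length, and a plain least-squares loss; the point is only that each segment is integrated from the measured state at its start, so no single solve has to span the whole trajectory.

```julia
using DifferentialEquations

# Toy dynamics and synthetic "measurements" (true parameter p = 0.3).
f!(du, u, p, t) = (du[1] = p[1] * u[1])
t_data = 0.0:0.5:5.0
u_data = exp.(0.3 .* t_data)

function shooting_loss(p; seg = 3)
    l = 0.0
    for i in 1:seg:(length(t_data) - seg)
        # Each "shoot" starts from the measured state at its own
        # start time, not from the end of the previous solve.
        tspan = (t_data[i], t_data[i + seg])
        prob = ODEProblem(f!, [u_data[i]], tspan, p)
        sol = solve(prob, Tsit5(), saveat = t_data[i:i+seg],
                    reltol = 1e-8, abstol = 1e-8)
        l += sum(abs2, first.(sol.u) .- u_data[i:i+seg])
    end
    return l
end
```

Because each segment is short, bad parameter guesses can't compound over the whole time span, which is what tends to flatten the local minima.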
abstractingagent
@abstractingagent
What was the criteria that you used to sample your trajectories?
I stepped my test problem up a notch, an ODE for a large scale bioreactor
I possibly bit off more than I can chew, but my goal is to use all the techniques
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> Well, my task was parameter estimation, not something with trajectories.
[slack] <pure_interpeter> (Although they are similar)
abstractingagent
@abstractingagent
Right, right - sorry didn't read that properly
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> I did it using DiffEqFlux because back when I did it, it had the best support. So I looked at the data I wanted to interpolate and divided it into multiple shoots of the length of the period in my data. This overcame the non-convexity of my problem, which appeared when the difference in the period was some rational like 4/3 and got a global fitness function stuck badly.
[slack] <pure_interpeter> In the end I did go with a global Optim.ParticleSwarmState() and an ADAM in the second step to refine the best solution. (I can't find my multiple shooting prototype anymore, though)
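The two-stage idea (a global particle swarm, then a local refinement started from its best point) might look roughly like this on a toy objective. This is a sketch, not the original code: Optim's `ParticleSwarm` plays the global role, and a gradient-based method stands in for the ADAM refinement step.

```julia
using Optim

# Toy non-convex objective standing in for the fitness function.
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Stage 1: global search with a particle swarm.
stage1 = optimize(rosenbrock, [0.0, 0.0],
                  ParticleSwarm(n_particles = 20),
                  Optim.Options(iterations = 500))

# Stage 2: refine the best candidate with a local method.
stage2 = optimize(rosenbrock, Optim.minimizer(stage1), BFGS())
```

The split matters because the global stage only has to land in the right basin; the cheap local stage then does the precise fitting.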
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> abstractingagent, do you want to see the two-stage-search (no multiple shooting) code, or is that uninteresting for you?
abstractingagent
@abstractingagent
Yeah, it would be helpful to see the problem from a different light
Right now my differential equation is misbehaving lol
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> Problems with finite escape time?
[slack] <pure_interpeter> https://pastebin.com/u9ZW1S6R My code. It's not much but it was good enough for my purposes.
abstractingagent
@abstractingagent
I really appreciate you reaching out and sharing your perspective
I think there is a bug somewhere, trying to run it down before I bother anyone with it
abstractingagent
@abstractingagent
Any ideas on how to stop the graph from flashing like a strobe light? From all this NODE experimenting I'm walking away from my chair like I got hit by the MIB flash several times over
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> My code was developed in Atom, so I am assuming a plot panel somewhere. If the shifting of the axes annoys you, you can fix the y coordinates with ylim= in the plot call, if I remember right
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> make it plot every few iterations?
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> I haven't found a nice way to do that, and throwing coins isn't optimal.
[slack] <chrisrackauckas> global iters
[slack] <chrisrackauckas> iters +=1
[slack] <chrisrackauckas> if iters % 50 == 0
[slack] <pure_interpeter> Yeah, that sucks.
[slack] <chrisrackauckas> make a callable type with an iters field?
[slack] <chrisrackauckas> A let block?
[slack] <chrisrackauckas> it's all the same thing.
[slack] <pure_interpeter> It sucks from a code cleanness standpoint
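One way to write the "callable type with an iters field" version without any globals (a sketch; the struct and field names here are made up):

```julia
# Wraps any zero-or-more-argument callback and only forwards every
# `every`-th call, keeping the counter out of global scope.
mutable struct ThrottledCallback{F}
    f::F
    every::Int
    iters::Int
end
ThrottledCallback(f; every = 50) = ThrottledCallback(f, every, 0)

function (cb::ThrottledCallback)(args...)
    cb.iters += 1
    if cb.iters % cb.every == 0
        cb.f(args...)
    end
    return false  # conventional "don't halt training" return value
end

# Usage: pass `cb` wherever a callback is accepted.
cb = ThrottledCallback(every = 10) do
    println("plot here")  # replace with the actual plot call
end
```

A `let` block closing over a local counter achieves the same thing with less ceremony; the struct version just makes the state inspectable (`cb.iters`) and reusable.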
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> A Lisp dialect I like has this "function" (it would be implemented as a macro in Julia): https://files.slack.com/files-pri/T68168MUP-F01BE8SLU2D/download/bildschirmfoto_vom_2020-09-20_23-31-57.png
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> open an issue on DiffEqSensitivity.
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> what in this is actually the issue?
[slack] <chrisrackauckas> It's mixing a ton of ideas and I'm not sure what I'm looking at. Is it the ensemble that's the problem? Or the callback?
[slack] <chrisrackauckas> How did it get there?
[slack] <chrisrackauckas> The simplified version is fine: SciML/DiffEqSensitivity.jl#247
[slack] <chrisrackauckas> (in iip)
BridgingBot
@GitterIRCbot
[slack] <timkim> It seems like the same error occurs even without callbacks (I changed it to be an ensemble of initial points). If we don't let it be an ensemble, it doesn't throw an error. So I suspect the problem is most likely with the ensemble.
[slack] <chrisrackauckas> ahh okay
[slack] <chrisrackauckas> open an issue on that. Thanks for reducing it.
abstractingagent
@abstractingagent
I am guessing that there is no method to train on multiple smaller overlapping trajectories, and that the only way is to pre-create the mini-batches using Flux's DataLoader and train the NODE like that?
Mapping updates through the sciml_train function to my callback and loss through a loop proved to be tedious and buggy