abstractingagent
@abstractingagent
Right now my differential equation is misbehaving lol
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> Problems with finite escape time?
[slack] <pure_interpeter> https://pastebin.com/u9ZW1S6R My code. It's not much but it was good enough for my purposes.
abstractingagent
@abstractingagent
I really appreciate you reaching out and sharing your perspective
I think there is a bug somewhere, trying to run it down before I bother anyone with it
abstractingagent
@abstractingagent
Any ideas on how to stop the graph from flashing like a strobe light? From all this NODE experimenting, I'm walking away from my chair like I got hit by the MIB flash several times over
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> My code was developed in Atom, so I am assuming a plot panel somewhere. If the shifting of the axis annoys you, you can fix the y coordinates with ylim= in the plot call, if I remember right
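For reference, a minimal sketch (not from the chat) of pinning the y axis in Plots.jl, assuming a solution object sol:

using Plots

# Fix the y limits so the view doesn't jump between redraws:
plot(sol; ylim = (-2, 2))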
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> make it plot every few iterations?
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> I haven't found a nice way to do that, and throwing coins isn't optimal.
[slack] <chrisrackauckas> global iters
[slack] <chrisrackauckas> iters += 1
[slack] <chrisrackauckas> if iters % 50 == 0
[slack] <pure_interpeter> Yeah, that sucks.
[slack] <chrisrackauckas> make a callable type with an iters field?
[slack] <chrisrackauckas> A let block?
[slack] <chrisrackauckas> it's all the same thing.
[slack] <pure_interpeter> It sucks from a code cleanness standpoint
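To make the counter idea above concrete, here is a minimal sketch (not from the chat) of the let-block variant of a throttled training callback; predict and the plotting details are hypothetical stand-ins:

using Plots

# A `let` block keeps the iteration counter local instead of using a global:
plot_cb = let iters = 0
    (p, l) -> begin
        iters += 1
        if iters % 50 == 0
            display(plot(predict(p)))  # `predict` is a hypothetical model-evaluation function
        end
        false  # returning false tells sciml_train to keep iterating
    end
end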
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> A Lisp dialect I like has this "function" (it would be implemented as a macro in Julia): https://files.slack.com/files-pri/T68168MUP-F01BE8SLU2D/download/bildschirmfoto_vom_2020-09-20_23-31-57.png
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> open an issue on DiffEqSensitivity.
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> what in this is actually the issue?
[slack] <chrisrackauckas> It's mixing a ton of ideas and I'm not sure what I'm looking at. Is it the ensemble that's the problem? Or the callback?
[slack] <chrisrackauckas> How did it get there?
[slack] <chrisrackauckas> The simplified version is fine: SciML/DiffEqSensitivity.jl#247
[slack] <chrisrackauckas> (in iip)
BridgingBot
@GitterIRCbot
[slack] <timkim> It seems like the same error occurs even without callbacks (I changed it to be an ensemble of initial points). If we don't let it be an ensemble, it doesn't throw an error. So I suspect the problem is most likely with the ensemble.
[slack] <chrisrackauckas> ahh okay
[slack] <chrisrackauckas> open an issue on that. Thanks for reducing it.
abstractingagent
@abstractingagent
I am guessing that there is no method to train on multiple smaller overlapping trajectories, and that the only way is to pre-create the mini-batches using Flux's DataLoader and train the NODE like that?
Mapping updates through the sciml_train function to my callback and loss through a loop proved to be tedious and buggy
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> it would be nice to have a helper function for creating minibatches like that.
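A hypothetical sketch of such a helper (the name overlapping_batches and its signature are invented here, not an existing SciML API):

# Split a trajectory (t, X) into windows of `window` points, starting a
# new window every `stride` points, so consecutive windows overlap.
function overlapping_batches(t::AbstractVector, X::AbstractMatrix, window::Int, stride::Int)
    starts = 1:stride:(length(t) - window + 1)
    [(t[s:s+window-1], X[:, s:s+window-1]) for s in starts]
end

# Usage: batches = overlapping_batches(sol.t, Array(sol), 20, 5)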
abstractingagent
@abstractingagent
I have to use one for what I am doing; I'll start on something, and if you want you can help me optimize it for other people's use?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> sure
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> oh oops
BridgingBot
@GitterIRCbot
[slack] <ericphanson> Why does it suck? It seems like it literally and straightforwardly describes what you want: every x iterations, do y.
BridgingBot
@GitterIRCbot
[slack] <pure_interpeter> Because I have to introduce a new global variable, and I can't express what I actually want to do.
BridgingBot
@GitterIRCbot
[slack] <stillyslalom> @chrisrackauckas any room for compact (Padé-like) finite difference operators in DiffEqOperators.jl?
[slack] <chrisrackauckas> Hmm I don't know those. Sounds fun. Reference?
[slack] <chrisrackauckas> The answer is almost certainly, yes.
[slack] <chrisrackauckas> that looks fantastic and a perfect fit.
[slack] <chrisrackauckas> @stillyslalom start by opening an issue with the reference so it's in the system
BridgingBot
@GitterIRCbot
[slack] <stillyslalom> SciML/DiffEqOperators.jl#278
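For context: compact (Padé-type) schemes reach high-order accuracy on a narrow stencil by defining the derivative values implicitly. A standard fourth-order tridiagonal example is

(1/4) f'(i-1) + f'(i) + (1/4) f'(i+1) = (3/(4h)) * (f(i+1) - f(i-1))

so each application solves a tridiagonal system for all the f'(i) at once, instead of computing each derivative from an explicit wide stencil.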
BridgingBot
@GitterIRCbot

[slack] <Adedotun Agbemuko> #diffeq-bridged #sciml
This issue is based on the DifferentialEquations.jl package.

To give proper context for my problem, I have a system of equations I want to solve in time, in the form
dx = Ax + Bu
where:
x = [x(1), x(2), ...x(24)] # state vector
u = [u(1), u(2), ...u(24)] # control reference input

I am attempting to implement a couple of hybrid callbacks. By this I mean a discrete callback (e.g. to make a step change in one or more of the inputs u at arbitrary but known times) and a continuous callback for a continuous check of certain states x(13) to x(24).

In principle, the solution values of x(13) to x(24) must be saturated if they exceed a bound on the low or high side. For example, if any of x(13):x(24) at any time is > 1.2, it is saturated to 1.2; if it is less than -1.2, it is saturated at -1.2. This saturation is important to the credibility of the solutions; it has to do with equipment rating/tolerance and actuator capability.

The discrete callback alone was implemented, and it worked perfectly as expected. However, when I try to combine it with the continuous callback, I get too many errors to understand or make sense of. What am I doing wrong? Below is a snippet of what I am trying to do:

Below is the discrete callback:

function time2step_disco(x, t, integrator)
    t == 10.0
end

function step_change_disco!(integrator)
    matrix_size = size(integrator.p)
    integrator.p[6, matrix_size[2]] -= integrator.p[6, matrix_size[2]]
end

Basically, this drives input 6 to zero by subtracting the original value from itself.

Below is the continuous callback:

function limit_check(I_limits, x, t, integrator)
    I_limits[1] = x[13:24] > 1.2
    I_limits[2] = x[13:24] < -1.2
end

function current_limiting!(integrator, idx)
    if idx == 1
        integrator.u[13:24] = 1.2
    elseif idx == 2
        integrator.u[13:24] = -1.2
    end
end

Then, in the call to the solver:

# Events
nodal_disconxn = DiscreteCallback(time2step_disco, step_change_disco!)
current_saturation = VectorContinuousCallback(limit_check, current_limiting!, 2) # 2 implies there are only 2 checks made, e.g. the high and low sides of the saturation curve
event_set = CallbackSet(nodal_disconxn, current_saturation)

So event_set is passed on to the ODEProblem.

abstractingagent
@abstractingagent
Nvm, scratch what I posted. I think all I have to do is create a "dataloader" that, instead of creating k batches, creates multiple overlapping trajectories, and then just use the ncycle mini-batch implementation
just to clarify, the data argument in sciml_train is plugged into the loss on the back end, right? assuming in x, y order, after params
abstractingagent
@abstractingagent
What does take(_data, maxiters) do in the sciml_train function? I could only find take! as a native Julia function when I did ?take in the REPL
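Presumably (an assumption, not confirmed in the chat) this is Base.Iterators.take, which lazily truncates an iterator after a fixed number of elements; living inside the Iterators module is why ?take at the top level only turns up take!:

using Base.Iterators: take

# take(itr, n) yields at most n elements of itr; inside sciml_train it
# would cap the number of minibatches consumed at maxiters.
collect(take(1:10, 3))  # => [1, 2, 3]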
abstractingagent
@abstractingagent
So it seems mini-batching is not effective at training a neural ODE if you use the ncycle implementation. Sampling a trajectory, continuously training on it until it fits, and then widening the tspan interval for training was far more effective
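A minimal sketch of that growing-tspan strategy, assuming a hypothetical loss_on(p, tspan) that solves the NODE on the given span against the data, an initial parameter vector θ, and the DiffEqFlux.sciml_train API of the time:

using DiffEqFlux, Flux

for T in (0.5, 1.0, 2.0, 4.0)  # progressively widen the training span
    res = DiffEqFlux.sciml_train(p -> loss_on(p, (0.0, T)), θ, ADAM(0.01); maxiters = 300)
    global θ = res.minimizer   # warm-start the next, longer span
end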
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> interesting to note.
[slack] <chrisrackauckas> yeah sorry I'm behind on responses: crazy week going on here.
[slack] <chrisrackauckas> @Adedotun Agbemuko continuous callbacks need to define the condition as a rootfinding function
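To illustrate the rootfinding form (a sketch assuming the ±1.2 bounds described above, not Chris's exact code): the condition writes a value that crosses zero at the event instead of a Bool, and the affect clamps the states with broadcasting:

# Condition: out[i] is a continuous quantity whose zero crossing marks the event.
function limit_check(out, x, t, integrator)
    out[1] = maximum(x[13:24]) - 1.2  # hits zero when any of x[13:24] reaches 1.2
    out[2] = minimum(x[13:24]) + 1.2  # hits zero when any of x[13:24] reaches -1.2
end

# Affect: clamp the offending states (note the broadcasting .=).
function current_limiting!(integrator, idx)
    if idx == 1
        integrator.u[13:24] .= min.(integrator.u[13:24], 1.2)
    elseif idx == 2
        integrator.u[13:24] .= max.(integrator.u[13:24], -1.2)
    end
end

current_saturation = VectorContinuousCallback(limit_check, current_limiting!, 2)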