BridgingBot
@GitterIRCbot
[slack] <arnavs> and
[slack] <arnavs> prob = ODEProblem(lotka_volterra,u0,tspan,p)
[slack] <arnavs> @chrisrackauckas I notice in the README vs the blog there’s an additional param() call around the parameters
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> then it's probably necessary
[slack] <chrisrackauckas> does that work?
[slack] <chrisrackauckas> (I wouldn't spend too much time learning the oddities of Tracker though)
[slack] <arnavs> I think that does it… now I’m getting errors related to the ODE calls, instead of tracker errors
[slack] <arnavs> thanks
[slack] <chrisrackauckas> interesting
[slack] <arnavs> well the ODE is on me, just taking an exponent of a thing that shouldn't be negative
[slack] <chrisrackauckas> :shrug:
[slack] <chrisrackauckas> oh okay, so you modified the example towards your problem?
[slack] <arnavs> yeah
[slack] <arnavs> but first I just tried to run the blog code
[slack] <arnavs> it also looks like it’s called diffeq_adjoint now and not diffeq_rd
[slack] <chrisrackauckas> depends on what you plan on doing with it
[slack] <chrisrackauckas> those are two different layers, just different gradient calculations
[slack] <arnavs> gotcha.
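For reference, a minimal sketch of the `ODEProblem` setup being discussed (assuming DifferentialEquations.jl; the `param()`/Tracker wrapping from the blog post and the `diffeq_rd`/`diffeq_adjoint` layers are omitted here, since those belong to the gradient-calculation step):

```julia
using DifferentialEquations

# Standard in-place Lotka-Volterra definition
function lotka_volterra(du, u, p, t)
    x, y = u
    a, b, c, d = p
    du[1] = a * x - b * x * y
    du[2] = -c * y + d * x * y
end

u0 = [1.0, 1.0]
tspan = (0.0, 10.0)
p = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka_volterra, u0, tspan, p)
sol = solve(prob, Tsit5())
```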
BridgingBot
@GitterIRCbot

[slack] <yakir12> The way I'm solving this (and please let me know if I'm missing the point here) is by using Optim.optimize to find the value of the parameter that results in the ODE terminating at the coordinate I already know. I'm using callbacks to make sure the ODE terminates at a set distance from the center of the track.

If you'd like I can write a more comprehensible explanation of what it is I'm doing.
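A hedged sketch of the workflow being described (all names and values here are illustrative, not from yakir12's actual model): terminate the ODE at a fixed radius `R` via a `ContinuousCallback`, then search for the parameter whose terminal point matches the known coordinate.

```julia
using DifferentialEquations, Optim

R = 1.0                        # stopping distance from the track center (assumed)
target = [-0.668, 0.744]       # known terminal coordinate on that circle (assumed)

# Toy dynamics: an outward spiral whose exit angle depends on p[1]
f!(du, u, p, t) = (du[1] = p[1] * u[1] - u[2]; du[2] = u[1] + p[1] * u[2])

# Terminate when the trajectory reaches distance R from the origin
condition(u, t, integrator) = hypot(u[1], u[2]) - R
cb = ContinuousCallback(condition, terminate!)

function loss(p)
    prob = ODEProblem(f!, [0.1, 0.0], (0.0, 100.0), [p])
    sol = solve(prob, Tsit5(); callback = cb)
    sum(abs2, sol.u[end] .- target)   # distance from the known terminal point
end

res = Optim.optimize(loss, 0.5, 2.0)  # bracketed 1-D search over the parameter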

matrixbot
@matrixbot
BarrOff When using callbacks, which root-finding method is used by default (for example: regula falsi, Newton, ...)?
BridgingBot
@GitterIRCbot
This message was deleted
[slack] <latex_for_slack> \dddot{x} + A\ddot{x} + \dot{x} - |x| + 1 = 0 (posted by asinghvi17)
[slack] <asinghvi17> Does DiffEq have support for 3rd order DAEs, like the one above?
[slack] <chrisrackauckas> @asinghvi17 change it to a first order
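The first-order reduction being suggested can be written out explicitly: introducing u_1 = x, u_2 = dx/dt, u_3 = d²x/dt² turns the third-order equation into the system

```latex
\dot{u}_1 = u_2, \qquad
\dot{u}_2 = u_3, \qquad
\dot{u}_3 = -A\,u_3 - u_2 + |u_1| - 1
```

(note that the $|u_1|$ term is not differentiable at $u_1 = 0$, which some solvers may handle poorly near that point).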
BridgingBot
@GitterIRCbot
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> yup, I saw that
[slack] <chrisrackauckas> re-writing all of your projects into TensorFlow is something you could do instead of differentiable programming, but personally, that seems like a crazy thing to do to productivity :shrug:
BridgingBot
@GitterIRCbot
[slack] <ivan.yashchuk> I don't get this trend of "differentiable physics"; it has been available for a long time, and people use it in PDE-constrained optimisation. What's the point of writing primitive solvers in tf/pytorch? Having a model written in FEniCS with the help of pyadjoint gives you that "differentiable physics framework", and it works in parallel and scales well.
BridgingBot
@GitterIRCbot
[slack] <krishnab> @chrisrackauckas, @ivan.yashchuk this is an interesting point. There is so much usage of the phrase "differentiable programming" nowadays that it gets confusing to understand what is actually new and what is "the same stuff as before but with a different name." I am still new to the differential equations world, but is the point of this GitHub project that they can run the fluid simulation using a parallelized solver that uses a neural network approach, as opposed to something like finite elements?
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> who knows what they're doing it for. A lot of people are doing it for the wrong reasons
[slack] <chrisrackauckas> pyadjoint is an interesting direction to go, and we actually have a GSoC to hook up Zygote with FEniCS.jl for that
[slack] <chrisrackauckas> I really hope someone takes up that project
[slack] <SebastianM-C> Can I have some kind of recursive broadcast with RecursiveArrayTools?
[slack] <chrisrackauckas> I never finished implementing that :shrug:
[slack] <SebastianM-C> Any pointers if I am trying to implement it?
I have vectors of SVectors (or similar stuff) with units and I want to be able to ustrip and uconvert
BridgingBot
@GitterIRCbot
[slack] <chrisrackauckas> MultiScaleArrays.jl has a recursive broadcast that you'd have to copy over and modify
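The recursive idea can be sketched minimally (`recursive_map` is an illustrative name, not an existing API, and this gives recursive `map` semantics rather than a true fused broadcast, which is what the MultiScaleArrays implementation provides):

```julia
# Descend through nested arrays and apply f at the leaves.
recursive_map(f, x::AbstractArray) = map(el -> recursive_map(f, el), x)
recursive_map(f, x) = f(x)

# Intended use case (assuming Unitful.jl and StaticArrays.jl):
#   v = [SVector(1.0u"m", 2.0u"m"), SVector(3.0u"m", 4.0u"m")]
#   recursive_map(ustrip, v)
```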

[slack] <krishnab> Ahh, interesting. So what do you mean by the wrong reasons. I am wondering if you have some sort of numerical methods criteria in mind? Like would a more conventional solver give better accuracy compared to a differentiable approach in some of these cases? I read the paper that you and Mike Innes wrote, but I don't remember if you wrote any criteria in there for comparing when a normal ode/pde solver works better than a differentiable approach.

I get that one of the key use cases of a diff programming approach is to use it when we don't know the underlying diffeq or such. But is there a good way to estimate or evaluate when a differentiable programming approach is better than a traditional solver. This is probably an open research area 🙂.

[slack] <chrisrackauckas> in most cases it's slower if you know a traditional method
[slack] <chrisrackauckas> it's good if you augment a traditional method with some extra stuff, or if you're dealing with something that cannot be handled by traditional methods (like high dimensional PDEs)
[slack] <chrisrackauckas> but if you just stick a neural network where a well-tuned solver already existed, it doesn't do magic.
[slack] <SebastianM-C> Thanks
I'll have a look
[slack] <krishnab> Got you. That makes a lot more sense. So the traditional solver makes more assumptions, and the corresponding algorithm can then be optimized around those assumptions. A differentiable approach makes fewer assumptions, so it has to do more work to learn that structure.
[slack] <chrisrackauckas> it could make more assumptions about structure
[slack] <chrisrackauckas> but most people aren't doing that
[slack] <krishnab> Yep. I get your point. Cool, thanks for the insights.
Simon Frost
@sdwfrost
I'm trying to go back to basics by setting up a DiscreteProblem of an SDE solved by the Euler-Maruyama method, for which I'd like my function to have access to (a) the time step, dt, as well as (b) a random number generator of my choosing, but without using globals. I've tried various keyword arguments to the function that I pass to DiscreteProblem, but without success...any tips?
Simon Frost
@sdwfrost
Figured it out! I was being silly and forgot about passing lambdas...
Christopher Rackauckas
@ChrisRackauckas
@sdwfrost lambdas or pass that stuff as parameters
@andrew-matteson yeah that ; is Julia syntax for the difference between a default argument and a keyword argument
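The "pass that stuff as parameters" suggestion can be sketched like this (a hedged sketch: `em_step` and the drift/diffusion closures are illustrative names, not DiffEq API; the point is that `dt` and the RNG travel in the parameter slot `p`, so no globals are needed):

```julia
using DifferentialEquations, Random

# Euler-Maruyama written as a discrete map: u_{n+1} = u_n + f(u) dt + g(u) sqrt(dt) ξ
function em_step(u, p, t)
    u .+ p.f(u) .* p.dt .+ p.g(u) .* sqrt(p.dt) .* randn(p.rng, length(u))
end

p = (dt  = 0.01,
     rng = MersenneTwister(42),       # any AbstractRNG of your choosing
     f   = u -> 1.0 .- u,             # drift (example)
     g   = u -> fill(0.1, length(u))) # diffusion (example)

prob = DiscreteProblem(em_step, [0.0], (0.0, 1.0), p)
sol = solve(prob, FunctionMap(); dt = p.dt)
```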
@BarrOff it's a false position method
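For reference, the false position (regula falsi) iteration can be sketched as follows (`falsi` is an illustrative name; the actual callback rootfinder works on the solver's interpolant rather than re-evaluating a user function):

```julia
# Minimal regula falsi: repeatedly take the secant root of the bracketing
# interval and keep the half where the sign change remains.
function falsi(f, a, b; tol = 1e-12, maxiters = 100)
    fa, fb = f(a), f(b)
    @assert fa * fb < 0 "root must be bracketed"
    c = a
    for _ in 1:maxiters
        c = (a * fb - b * fa) / (fb - fa)  # secant through (a,fa), (b,fb)
        fc = f(c)
        abs(fc) < tol && break
        if fa * fc < 0
            b, fb = c, fc
        else
            a, fa = c, fc
        end
    end
    return c
end

falsi(x -> x^2 - 2, 0.0, 2.0)  # converges toward sqrt(2)
```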