Paweł Biernat
@pwl
looks like the error estimate from the RK method is the bottleneck here; the formulas from Hairer & Wanner require component-wise estimates.
they cannot be replicated in a purely vector-space manner (i.e. without referring to the components)
Christopher Rackauckas
@ChrisRackauckas
FYI many algorithms run more efficiently when not doing the component-wise estimate in L2. L infinity is more sensitive to it.
At least that's what I've seen from ~10 ODEs and ~5 SDEs
Paweł Biernat
@pwl
can you rephrase your first comment?
Christopher Rackauckas
@ChrisRackauckas
So I purposely leave the length scaling off by default.
If your error estimate is in L2, for many problems using the standard L2 norm instead of the Hairer scaled-L2 norm (sqrt(sum(squares)/length)) can be more efficient.
You can run a bunch of benchmarks yourself and see that it's not obviously better to scale it. Your tolerance will be slightly off for large (100x100) problems.
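For reference, a minimal sketch in plain Julia of the two norms being compared here (the function names are illustrative, not ODE.jl internals):

  # err is the vector of componentwise error estimates
  unscaled_l2(err) = sqrt(sum(abs2, err))                 # standard L2 norm
  hairer_l2(err)   = sqrt(sum(abs2, err) / length(err))   # length-scaled ("Hairer") L2 norm

  err = fill(1e-3, 100)
  unscaled_l2(err), hairer_l2(err)   # (0.01, 0.001): they differ by sqrt(length(err)) = 10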
Paweł Biernat
@pwl
yeah, I don't see the obvious reason for the length scaling in Hairer
Christopher Rackauckas
@ChrisRackauckas
It's so that the tolerance lines up.
But not scaling works pretty well too since it's all local...
Paweł Biernat
@pwl
but this is not the issue I have. The problem is that the "L2 norm" is not actually a norm, because the weights depend on the components.
Christopher Rackauckas
@ChrisRackauckas
The L2 norm is a norm...?
The main issue I have with ODE.jl is that its default tolerances are really weird.
Paweł Biernat
@pwl
sorry, I mean that in Hairer they have a formula for the error that looks like an L2 norm but it is actually not a norm, because the weights in the sqrt(sum(squares/weights)) depend on the state of the solution.
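For concreteness, a sketch of the Hairer & Wanner-style error measure being described (Solving ODEs I, Sec. II.4), with generic names; the scale factors sc are built from the current and previous state, which is why the formula is not a fixed norm of err:

  # componentwise scale factors depend on the solution itself
  function hairer_error(err, y0, y1, abstol, reltol)
      sc = abstol .+ reltol .* max.(abs.(y0), abs.(y1))   # state-dependent weights
      return sqrt(sum(abs2, err ./ sc) / length(err))     # step is accepted if this is <= 1
  end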
Christopher Rackauckas
@ChrisRackauckas
Oh yeah
At each timepoint it's a norm though
Paweł Biernat
@pwl
no, because it is never linear in y
ok, in a sense you are correct
Christopher Rackauckas
@ChrisRackauckas
I don't see why it would matter though.
Paweł Biernat
@pwl
anyway, what did you mean by the default tolerances?
Christopher Rackauckas
@ChrisRackauckas
your reltol and abstol
Paweł Biernat
@pwl
what would you like them to be, and why?
Christopher Rackauckas
@ChrisRackauckas
usually they are like 10^-6 and 10^-3 or something like that.
I thought that's what MATLAB does, and I know DOPRI (/all of ODEInterface) does that.
Paweł Biernat
@pwl
well, I won't argue for our current choice, I always specify the tolerances by hand anyway
but what's the rationale behind a particular choice, apart from the fact that other solvers use them?
Christopher Rackauckas
@ChrisRackauckas
10^-3 means you're right in the first 3 digits.
Anything beyond that is usually hard to see, at least when plotted.
It's a nice default, and if people want higher precision, they probably should be using the default algorithm (DP5) anyways.
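For illustration, a hypothetical ODE.jl call with MATLAB-style defaults (reltol = 1e-3, abstol = 1e-6) spelled out; the keyword names are assumed to match ODE.jl's reltol/abstol options:

  using ODE
  F(t, y) = -y                                    # toy right-hand side
  tout, yout = ode45(F, [1.0], [0.0, 10.0]; reltol = 1e-3, abstol = 1e-6)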
Mauro
@mauro3
@pwl: sorry I have not had time to look at our work recently. And will not have much time at all until early September.
Paweł Biernat
@pwl
sure, no worries, I'm pretty pressed with other stuff as well and work on ODE.jl only occasionally.
Mauro
@mauro3
tnx
Christopher Rackauckas
@ChrisRackauckas
ODE.jl fails to load on v0.4.6 for my tests (on your dev branch). Is this expected?
LoadError: LoadError: LoadError: type Array has no field x
 in anonymous at /home/travis/.julia/v0.4/ODE/src/tests/test_cases.jl:68
 in call at /home/travis/.julia/v0.4/ODE/src/base.jl:84
 in include at ./boot.jl:261
 in include_from_node1 at ./loading.jl:320
 in include at ./boot.jl:261
 in include_from_node1 at ./loading.jl:320
 in require at ./loading.jl:259
 in eval at /home/travis/.julia/v0.4/DifferentialEquations/src/DifferentialEquations.jl:3
 in init_package at /home/travis/.julia/v0.4/DifferentialEquations/src/general/backends.jl:23
 in init_package at /home/travis/.julia/v0.4/DifferentialEquations/src/general/backends.jl:16
 in initialize_backend at /home/travis/.julia/v0.4/DifferentialEquations/src/general/backends.jl:5
 in solve at /home/travis/.julia/v0.4/DifferentialEquations/src/ode/ode_solve.jl:230
This started about 20 hours ago.
Paweł Biernat
@pwl
It's been a while since I ran tests on 0.4, but I can reproduce this.
The issue is the new test case involving a custom type.
On the other hand, we were thinking of dropping support for 0.4, now that 0.5 is so close.
You could just ignore it until 0.5 is released; if you need this to work on 0.4, though, I will take a look.
Christopher Rackauckas
@ChrisRackauckas
Nah I just disabled the test
Paweł Biernat
@pwl
Cool!
less work for me :-)
Christopher Rackauckas
@ChrisRackauckas
Hey, do you guys know how to make Sundials.jl do adaptive timestepping?
Mauro
@mauro3
As far as I recall, Sundials always does adaptive stepping, but with the high-level interface you only ever see the dense output. With the low-level you get the adaptive steps and can request dense output.
Christopher Rackauckas
@ChrisRackauckas
Oh I see. I just want everywhere that it stepped in order to compare with the other solvers. Do you know how I'd do that? I don't want the dense output because I want to know each of its calculations (how many steps it took, etc.)
Mauro
@mauro3
Hmm... Maybe check what the values of this tout are: https://github.com/JuliaMath/Sundials.jl/blob/master/src/Sundials.jl#L323 Maybe those are the actual steps.
Christopher Rackauckas
@ChrisRackauckas
BTW, how is that GSoC student's project going?
Christopher Rackauckas
@ChrisRackauckas
Hey, I have consolidated all of the differential equations chatrooms to one room: https://gitter.im/JuliaDiffEq/Lobby . It's totally fine to keep this as a development chatroom, just wanted you to know that I'm pooling all JuliaDiffEq users to one place so you'll know where to give help.
Paweł Biernat
@pwl
@mauro3, how are you? any chance we could finish our work on ODE.jl any time soon?
Mauro
@mauro3
Tnx, I'm doing well. My workload has anything but lightened up, though, so the time I can spend on this is quite limited. Do you have a strategy? What's left to do?