Paweł Biernat
@pwl
1) our empty loops are empty because they are simple examples, not actual use cases. Inside the loop you could have a plotting function to draw the solution at a given time, some write calls to save the results to a file, etc. These are common use cases in my programs. It's kind of pointless to argue about what the typical use case is; instead we should think about how to provide the most flexible interface.
2) regardless of the output format, this collect_transpose, or whatever you call it, is essentially an extremely simple wrapper around an iterator. It's still far from what the name solve would suggest.
3) maybe we shouldn't fixate on a particular output type being the most useful, and instead give collect_transpose an option for how to accumulate the output
4) I don't claim that collect_transpose is the right name for the function we need, but it's closer to what it actually does than solve.
What about results(solve(...))?
Paweł Biernat
@pwl
Another idea: have iterate(ode;opts...) return an iterable object and have solve(ode;opts...) return a solution. This way you don't have to pass the return value of iterate to solve, and the names look pretty clear to me.
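A toy sketch of that split, in the Julia syntax of the time; the forward-Euler iterator and the iterate_ode/solve_ode names are purely illustrative, not ODE.jl code:

# illustrative only: a lazy iterator over (t, y) pairs, plus an eager solve on top of it
immutable EulerIterator{F,T}
    f::F              # right-hand side f(t, y)
    t0::T
    y0::Vector{T}
    dt::T
    nsteps::Int
end

Base.start(it::EulerIterator) = (it.t0, copy(it.y0), 0)
Base.done(it::EulerIterator, s) = s[3] >= it.nsteps
function Base.next(it::EulerIterator, s)
    t, y, n = s
    ynew = y + it.dt * it.f(t, y)
    return (t + it.dt, ynew), (t + it.dt, ynew, n + 1)
end

# "iterate": returns the iterable object, does no work yet
iterate_ode(f, t0, y0; dt = 0.1, nsteps = 10) = EulerIterator(f, t0, y0, dt, nsteps)

# "solve": drives the iterator and accumulates the result
function solve_ode(f, t0, y0; opts...)
    ts, ys = typeof(t0)[], Vector{typeof(t0)}[]
    for (t, y) in iterate_ode(f, t0, y0; opts...)
        push!(ts, t)
        push!(ys, y)
    end
    return ts, ys
end

With a split like this, solve is just one consumer of the iterator; plotting or writing to a file inside the loop are others.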
As for the output type I still would rather have the big matrix output as an option instead of the default (you could pass format options to solve).
by the way, are there any other use cases for a huge matrix as output other than just plotting?
Paweł Biernat
@pwl
what if y0 is already a matrix? or even an instance of AbstractArray? It could well be a sparse matrix or even a custom iterable type.
Mauro
@mauro3
Yay!!! +∞ to iterate. Should I update #17 with iterate?
I think generally post-processing the data is much easier when in matrix form, e.g. average concentration at (x,y). Also, as benchmarked by yuyichao, vectors of vectors are often slower: https://github.com/JuliaLang/ODE.jl/issues/80#issuecomment-154064027 .
Mauro
@mauro3
I think the way to handle this is to return a matrix whenever y0<:AbstractVector. The output matrix is then created with similar(y0, length(y0), length(tout)), or, if the number of output points is not known, with similar(y0,0); push!(...); reshape(...).
I would think that most people would want the matrix output, so I'd do the vec-vec as an option. Or is there a reason to want a vec-vec?
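Sketched out, the unknown-length case looks roughly like this (the loop body is only a stand-in for actually stepping the integrator, in the Julia syntax of the time):

y0 = [1.0, 2.0]

# if the number of output points were known, one could allocate up front:
#   yout = similar(y0, length(y0), npts)
# when it is not known: grow a flat buffer, then reshape at the end
buf = similar(y0, 0)
nsteps = 0
for t in linspace(0, 1, 11)        # stand-in for stepping the integrator
    y = exp(-t) * y0               # stand-in for the computed state at t
    append!(buf, y)                # push the components of y
    nsteps += 1
end
yout = reshape(buf, length(y0), nsteps)   # 2x11 matrix; reshape shares the memory, no copy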
Paweł Biernat
@pwl
go ahead and update #17 (you could rebase it on dev if you have the time)
Paweł Biernat
@pwl

as for the output, let's say I have a custom type to hold points:

type Point{T} <: AbstractVector{T}
    x::T
    y::T
end
Base.size(::Point) = (2,)
Base.getindex(p::Point, i::Int) = i == 1 ? p.x : p.y

there is no way you can push it into a Matrix without losing its interpretation as a point.

Moreover, how do you imagine handling the matrix if you don't know its size a priori? The only way you will know the size is to run the iteration, but then you still have to store the output at the same time. And if you are already storing the output (as a vector), you would only lose performance by converting it afterwards (e.g. to a matrix).

In the context of what I wrote, it seems to me that having it as an optional conversion step would be fine. Although I still don't like the idea of having a function with different return types depending on an option.
Paweł Biernat
@pwl
Going back to #17, I would vote for having iterate return an IterableSolution instead of Problem (Problem only shows up in base.jl anyway, so changing it shouldn't affect anything).
Paweł Biernat
@pwl
and as far as the output goes, we might want to think ahead about the return type. What about having a specialized type to store a solution? We could consider f=solve(ode) giving an interpolable object with call overloaded so that f(t) => y(t) and f'(t) => dy(t). And with getindex defined so that f[i]=>y[i] or f[i]=>(t[i],y[i],dy[i]).
and then we could have yout(f) returning your Matrix or whatnot
just a thought!
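A rough sketch of what such a type could look like; the field names are made up, and plain linear interpolation stands in for real dense output:

# illustrative only: a solution type with call and getindex overloading
immutable Solution{T,Y}
    t::Vector{T}
    y::Vector{Y}
    dy::Vector{Y}
end

# f(t): interpolate between stored points (linear here, in place of proper dense output)
function (s::Solution)(t)
    i = clamp(searchsortedlast(s.t, t), 1, length(s.t) - 1)
    w = (t - s.t[i]) / (s.t[i+1] - s.t[i])
    return (1 - w) * s.y[i] + w * s.y[i+1]
end

# f[i] => (t[i], y[i], dy[i])
Base.getindex(s::Solution, i::Integer) = (s.t[i], s.y[i], s.dy[i])

# the "big matrix" output as an explicit extraction step (assuming the states are vectors)
yout(s::Solution) = hcat(s.y...)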
Mauro
@mauro3

Yes, you're right, changing the return type depending on a keyword argument is bad.

Concerning your example, yes, there can be corner cases. The beauty of building on iterators is that one can always drop down to them and do what one wants. Alternatively, we could only return a matrix when isa(y0, Vector); that should cover 95% of the use cases.

When the number of output points is not known, make a vector, push! to that, and reshape at the end. (Note that the reshape does not copy but just reinterprets the memory as a matrix.)

I still don't want to call something a solution which is not a solution at all but can only be used to, maybe, generate a solution.

Eventually having a solution type would be good (but in another PR). And having an option to store all the info to do dense output on the solution ex-post would be cool. However, it should be optional, as for big problems this may well be more info than one wants to store.
Paweł Biernat
@pwl
so, in preparation for implementing a custom solution type, we should have functions extracting the raw data out of the return type of solve, which, for now, could be whatever we feel like.
Mauro
@mauro3
gett, gety, getdy?
Paweł Biernat
@pwl
makes sense
Paweł Biernat
@pwl
getty to return (t,y)?
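Concretely, something along these lines, with the field names as placeholders for whatever solve happens to return in the meantime:

# placeholder accessors over the current return value of solve
gett(sol)  = sol.t
gety(sol)  = sol.y
getdy(sol) = sol.dy
getty(sol) = (gett(sol), gety(sol))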
Paweł Biernat
@pwl
I would argue that if we are going to support custom user-defined types (see my Point example) we can't treat them as a corner case. I would rather have the usual output from solve and a special solve_matrix for anyone who wants the matrix output.
Paweł Biernat
@pwl
I'm playing around with minimal type requirements and I was considering a type without getindex. In principle, we don't need indexing for some solvers to work; we only need a type resembling an element of a (mathematical) vector space
so no need for length and getindex
of course it will work only for explicit solvers
any thoughts on that?
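As an illustration, a type with no getindex or length at all still supports an explicit fixed-step update (everything here is a made-up sketch, in the Julia syntax of the time):

import Base: +, -, *

# no getindex, no length: just an element of a vector space
immutable FreeVec{T}
    a::T
    b::T
end
+(u::FreeVec, v::FreeVec) = FreeVec(u.a + v.a, u.b + v.b)
-(u::FreeVec, v::FreeVec) = FreeVec(u.a - v.a, u.b - v.b)
*(c::Number, u::FreeVec)  = FreeVec(c * u.a, c * u.b)

# an explicit step only ever needs these operations...
euler_step(f, t, y, dt) = y + dt * f(t, y)
# ...it is the adaptive error control that wants per-component information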
Paweł Biernat
@pwl
looks like the error estimate from the RK method is the bottleneck here; the formulas from Hairer & Wanner require component-wise estimates.
they cannot be replicated in a purely vector-space manner (i.e. without referring to the components)
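For reference, the kind of error measure being discussed, side by side with a plain L2 measure (names illustrative):

# component-wise scaled error estimate: each component gets its own weight,
# which is why the formula needs access to the components of y
function scaled_err(e, y, abstol, reltol)
    sc = abstol .+ reltol .* abs.(y)
    return sqrt(sum((e ./ sc).^2) / length(e))
end

# a plain L2 measure that never looks at individual components
plain_err(e) = sqrt(sum(e.^2))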
Christopher Rackauckas
@ChrisRackauckas
FYI many algorithms run more efficiently when not doing the component-wise estimate in L2. L infinity is more sensitive to it.
At least that's what I've seen from ~10 ODEs and ~5 SDEs
Paweł Biernat
@pwl
can you rephrase your first comment?
Christopher Rackauckas
@ChrisRackauckas
So I purposely leave the length scaling off by default.
If your error estimate is in L2, for many problems using the standard L2 norm instead of the Hairer scaled L2 norm (sqrt(sum(squares)/length)) can be more efficient.
You can run a bunch of benchmarks yourself and see that it's not obviously better to scale it. Your tolerance will be slightly off for large (100x100) problems.
Paweł Biernat
@pwl
yeah, I don't see the obvious reason for the length scaling in Hairer
Christopher Rackauckas
@ChrisRackauckas
It's so that the tolerance lines up.
But not scaling works pretty well too since it's all local...
Paweł Biernat
@pwl
but this is not the issue I have. The problem is that the L2 norm is not actually a norm, because the weights depend on the components.
Christopher Rackauckas
@ChrisRackauckas
The L2 norm is a norm...?
ODE.jl's main thing is that your standard tolerances are really weird.
Paweł Biernat
@pwl
sorry, I mean that in Hairer they have a formula for the error that looks like an L2 norm but it is actually not a norm, because the weights in the sqrt(sum(squares/weights)) depend on the state of the solution.
Christopher Rackauckas
@ChrisRackauckas
Oh yeah
At each timepoint it's a norm though
Paweł Biernat
@pwl
no, because it is never linear in y
ok, in a sense you are correct
Christopher Rackauckas
@ChrisRackauckas
I don't see why it would matter though.
Paweł Biernat
@pwl
anyway, what did you mean by the default tolerances?
Christopher Rackauckas
@ChrisRackauckas
your reltol and abstol