These are chat archives for bluescarni/pagmo_reborn

10th
Jun 2016
Francesco Biscani
@bluescarni
Jun 10 2016 06:52
my body shut down yesterday. went to the couch at 8:30 after Ben went to sleep, put my head down, woke up this morning
Francesco Biscani
@bluescarni
Jun 10 2016 07:01
so there's a few things going on here
if you look at the decompose documentation, there are some bugs in the implementation:
https://europeanspaceagency.gitlab.io/PaGMOreborn/docs/problems/decompose.html
the has_hessians() and has_gradient() methods are still there, and that's not a doxygen error
so it's not deleting anything actually
same for has_hessians()
so the first problem is that these two methods, which we seemingly do not want to be inherited from problem, are present in decompose
the second thing is that, after decompose was written, I altered the problem to re-introduce the has_sparsity() override methods
you remember, because of that performance degradation related to the checking of the dense sparsities etc. etc.
so these "new" has_sparsity() methods are not deleted in decompose, and they are present there (see the doxygen documentation)
Francesco Biscani
@bluescarni
Jun 10 2016 07:06
it's not clear to me if we want them to be there or not, or maybe we want them to always return false or whatever
but it's yet something else that might contribute to the behaviour you are seeing
finally, the same issues might be present in translate, so it's worth checking out
my first impression is that it would be better to directly disable everything in decompose/translate, and that the dense implementations of the gradient that are there are not really needed
as the problem encapsulating decompose will provide those dense implementations
but that's just my 1st impression
Francesco Biscani
@bluescarni
Jun 10 2016 07:33
need to catch the train, bbl
Dario Izzo
@darioizzo
Jun 10 2016 07:34
see you later, I'll read it
Dario Izzo
@darioizzo
Jun 10 2016 07:51
So, I think your diagnosis is almost correct.
1 - YES, those methods were not deleted as they missed the const.
2 - YES, I added the delete for has_hessians_sparsity and has_gradient_sparsity
The above changes "fix" the problem and now the lines are called as expected.
3 - I do not think though we can get rid of the dense implementations of the gradients that are in decompose.hpp
Without them it would use the default sparsity implementation, but with 2 objectives, while now the problem is decomposed and has only 1.
In translate since sparsity is not changed we may inherit the methods so that we will see the final user-defined one ... (I guess)
Francesco Biscani
@bluescarni
Jun 10 2016 08:28
ah yeah I forgot that decompose changes the dimension
Dario Izzo
@darioizzo
Jun 10 2016 08:28
our coverage jumped to 97% overnight :)
codecov is really doing weird stuff
Francesco Biscani
@bluescarni
Jun 10 2016 08:29
yeah it's buggy
at least it gives some info though
Dario Izzo
@darioizzo
Jun 10 2016 08:30
the info is correct I hope, but the yellow, red and the percentages are good rngs
Francesco Biscani
@bluescarni
Jun 10 2016 08:30
I don't see much yellow at the moment
Dario Izzo
@darioizzo
Jun 10 2016 08:31
today not ...
but tomorrow?
Francesco Biscani
@bluescarni
Jun 10 2016 08:31
there's a bunch of throws in problem.hpp which are untested, so there's some yellow there
Dario Izzo
@darioizzo
Jun 10 2016 08:31
working on it
Francesco Biscani
@bluescarni
Jun 10 2016 08:32
which looks correct-ish, as only one branch is taken
alright
Dario Izzo
@darioizzo
Jun 10 2016 08:32
But i only see reds
Dario Izzo
@darioizzo
Jun 10 2016 08:32
yes, only reds
need a screenshot?
Francesco Biscani
@bluescarni
Jun 10 2016 08:33
I see zero yellows in the summary but yellow lines in the code below
Dario Izzo
@darioizzo
Jun 10 2016 08:33
tell me the exact line
Francesco Biscani
@bluescarni
Jun 10 2016 08:33
ahaha now it's only red
wtf
Dario Izzo
@darioizzo
Jun 10 2016 08:33
rngs told ya
Francesco Biscani
@bluescarni
Jun 10 2016 08:33
there must be something cached maybe
Dario Izzo
@darioizzo
Jun 10 2016 08:34
I think they may have "fixed" something overnight ....
at least they are working on it
Francesco Biscani
@bluescarni
Jun 10 2016 08:34
I am not sure but I won't pursue this further :)
at least it helped us find a bug
Dario Izzo
@darioizzo
Jun 10 2016 08:34
The reds are correct so far ... anyway they help me to know where to look for problems
The test_coverage branch I created ... we can keep that opened continuously too ....
And whenever we have time fix there a few cov problems .... and merge now and then?
Francesco Biscani
@bluescarni
Jun 10 2016 08:36
I am not sure about this workflow... bad coverage should prevent merge of other branches
it should be treated as a bug
so if a merge request has bad coverage, it just does not get merged
Dario Izzo
@darioizzo
Jun 10 2016 08:37
But you said it's unrealistic to ask for 100% cov
Francesco Biscani
@bluescarni
Jun 10 2016 08:37
yes, there should be exceptions.. but that does not mean that whatever happens is ok
Dario Izzo
@darioizzo
Jun 10 2016 08:37
so what do you propose in terms of workflow?
Francesco Biscani
@bluescarni
Jun 10 2016 08:38
so as general rules:
  • the code coverage of the project should not decrease,
  • there should not be unchecked coverage in merge requests
with particular exceptions of course
Dario Izzo
@darioizzo
Jun 10 2016 08:38
I agree, but how to enforce this?
Francesco Biscani
@bluescarni
Jun 10 2016 08:39
this is harder to enforce here because apparently codecov does not hook up yet to the gitlab MR system
so we need to do this manually
on codecov you can check the coverage on specific branches
there's also a section for pull requests but this does not work on gitlab: https://codecov.io/gl/EuropeanSpaceAgency/PaGMOreborn/pulls
so the bottom line is that as long as we are doing branches in our own repos, it's only slightly inconvenient
Dario Izzo
@darioizzo
Jun 10 2016 08:40
so i can use codecov on my branch?
cool, now I can check if I fixed anything then :)
Francesco Biscani
@bluescarni
Jun 10 2016 08:40
but once (if ever) people start doing branches from their own clones of pagmo, we won't be able to see coverage unless people setup codecov on their own
you can already
codecov is enabled on all branches
lol it shows less coverage
probably tomorrow it will be better I guess
I have a feeling that some stuff gets updated daily
Dario Izzo
@darioizzo
Jun 10 2016 08:42
Check the coverage branch .. in decompose
what do you see?
Francesco Biscani
@bluescarni
Jun 10 2016 08:43
I see the methods are called
plus random yellow
no red
Dario Izzo
@darioizzo
Jun 10 2016 08:43
ok me too ...
confusing though
only 89% cov
master has 9x !
Francesco Biscani
@bluescarni
Jun 10 2016 08:44
I bet they have a cronjob running on their server... "fix_stupid_bullshit.sh"
anyway I doubt many people are using it from codecov
from gitlab I mean
btw
is there a EuropeanSpaceAgency user
or is it a group or what?
Dario Izzo
@darioizzo
Jun 10 2016 08:45
I think a group ...
why?
Francesco Biscani
@bluescarni
Jun 10 2016 08:45
because sometimes when I try to poke at the settings in codecov it tells me I have no permissions
like if you try to create a new webhook
whatever that means
Dario Izzo
@darioizzo
Jun 10 2016 08:47
dunno
I say let's wait until it "stabilizes", then we take the time to see how to set it up correctly
Francesco Biscani
@bluescarni
Jun 10 2016 08:56
sure
now that we have the python docs as well I think we should reorganise the file structure of the sphinx doc
the logical structure you see from the html is ok I think
but the file structure of the .rst files is a bit all over the place
Dario Izzo
@darioizzo
Jun 10 2016 08:57
I agree
it's growing wildly
Francesco Biscani
@bluescarni
Jun 10 2016 08:57
I think we just need to put the cpp stuff in a separate dir and we are good for the moment
https://europeanspaceagency.gitlab.io/PaGMOreborn/docs/problems/translate.html does the translate look good with respect to all the gradient/sparsity stuff?
Dario Izzo
@darioizzo
Jun 10 2016 09:17
I think so .... but did not really think it through ...
Francesco Biscani
@bluescarni
Jun 10 2016 09:35
this should probably be checked in the tests to see if it respects our expectations
Francesco Biscani
@bluescarni
Jun 10 2016 09:41
I updated the breathe version on the docker image, now the warnings are gone
this also fixed some doc bugs, such as the default values of function params on the C++ side not showing up
Dario Izzo
@darioizzo
Jun 10 2016 09:42
@bluescarni What expectations do you want to test?
the sparsity stuff I guess is checked,
though not in translate.
Francesco Biscani
@bluescarni
Jun 10 2016 09:43
I'd like to test the behaviour of decompose/translate in a variety of scenarios. Maybe it's done, maybe not, but from your reply I got the impression it's not tested
example:
  • problem with gradient, without sparsity -> what do we expect here from translate/decompose?
  • problem with gradient and sparsity
Dario Izzo
@darioizzo
Jun 10 2016 09:45
you want to test decompose or the problem constructed from decompose?
Francesco Biscani
@bluescarni
Jun 10 2016 09:45
  • problem without gradient and with sparsity (does it make any sense?)
I want decompose to implement the methods of the problem interface in the expected way
I mean, if you tell me "I think so .... but did not really think it through ..." I am not sure what we are really testing :)
Dario Izzo
@darioizzo
Jun 10 2016 09:46
In decompose all is good. My uncertainty was on translate.
But again the uncertainty is not on the code, rather on what we actually expect from it :)
Francesco Biscani
@bluescarni
Jun 10 2016 09:47
well, the philosophy of the meta problems, as far as I have understood, is that they just relay whatever is implemented in the original user-defined problem, modulo modifications that are meta-problem dependent
Dario Izzo
@darioizzo
Jun 10 2016 09:47
nice sentence
I bet in one year it will make no sense to either of us :)
Francesco Biscani
@bluescarni
Jun 10 2016 09:48
:) but the fact that those issues we solved today arose, makes me think the testing was not really thorough as we did not catch these issues
Dario Izzo
@darioizzo
Jun 10 2016 09:48
correct
We are not testing the delete
I think that was the issue
Francesco Biscani
@bluescarni
Jun 10 2016 09:49
so I am more interested in learning how we can improve the process.. fixing a single bug is of course important, but what have we learned?
nothing probably :)
Dario Izzo
@darioizzo
Jun 10 2016 09:49
no, I thought along the same lines ....
not sure what to do though ... I added a few tests that would catch this now, but I would like to have a systematic way of doing it for metas.
Truth is that each meta is different so also tests will be different
Francesco Biscani
@bluescarni
Jun 10 2016 09:51
I mean, for testing the deletes we can use the type traits
it's better than inspecting the doxygen output :)
Dario Izzo
@darioizzo
Jun 10 2016 09:51
but we do not have type traits for all of those, do we?
Would writing new ones be overkill?
Francesco Biscani
@bluescarni
Jun 10 2016 09:52
I think we do for a lot of them, otherwise how would we detect them in the problem in the first place?
bool has_hessians() const = delete; this one we can detect for sure
Dario Izzo
@darioizzo
Jun 10 2016 09:52
unsigned long long get_fevals() const = delete;
unsigned long long get_gevals() const = delete;
unsigned long long get_hevals() const = delete;
I can add all those that we have
Francesco Biscani
@bluescarni
Jun 10 2016 09:53
the mindfuck is all the sparsity/gradient stuff, I'd care about those first
Dario Izzo
@darioizzo
Jun 10 2016 09:53
the ones that are not there ... well I would not dare
true
Francesco Biscani
@bluescarni
Jun 10 2016 09:53
so let me put this way
what methods do we expect in a decompose? and what values should they return? we know for instance we don't care about the original problem sparsity, it will be erased anyway
Dario Izzo
@darioizzo
Jun 10 2016 09:55
But that is why the test was there in the first place. Only it was not calling the functions we thought and by luck the inner ones were good as well
Francesco Biscani
@bluescarni
Jun 10 2016 09:56
ok that's good then
it was a sneaky one
Dario Izzo
@darioizzo
Jun 10 2016 09:56
I think the fact we did not check the delete made the bug go through
weird that a compiler does not tell you that you are trying to delete a method that is not there
Francesco Biscani
@bluescarni
Jun 10 2016 09:57
I am just saying that if we say "dunno" to the question "what is the expected behaviour?" maybe we should stop a moment and consider making this clear once and for all :)
Dario Izzo
@darioizzo
Jun 10 2016 09:57
yes yes ... :+1:
btw.. the problem is also that it was clear when it was made .... then time goes on and you tend to forget ...
Francesco Biscani
@bluescarni
Jun 10 2016 09:59
yes I know, that's why we document stuff and put long explanatory comments on the tricky parts... :p
Dario Izzo
@darioizzo
Jun 10 2016 09:59
Let me fix the problem coverage, then I return to meta and add some tests on the overall logic (and docs update maybe)
Francesco Biscani
@bluescarni
Jun 10 2016 09:59
(or why we should)

ok... sorry for being a pain in the ass but this comment for instance:

    // deleting the hessians sparsity is necessary and sparsity will by default be dense
    bool has_hessians_sparsity() const = delete;

is not totally clear to me... it's not really necessary to delete this, you could also set it to always return true (and this will invoke the dense overloads in decompose). I like your solution better because it's less clutter in the methods list, but if I am coming back to this in 3 months I am not sure it will make any sense

Dario Izzo
@darioizzo
Jun 10 2016 10:02
agree
Francesco Biscani
@bluescarni
Jun 10 2016 10:02
so since this was a pain to debug and a "gotcha" moment when we figured out what was going wrong, I'd consider putting a small comment paragraph explaining the choices regarding deleting and overriding methods and the rationale for doing this
probably it will be still a pain to understand this in a month, but maybe less so
Dario Izzo
@darioizzo
Jun 10 2016 10:08
    // deleting has_gradient allows the automatic detection of gradients to see that decompose does not have any, regardless
    // of whether the class it's built from has them. A decompose problem will thus never have gradients
    bool has_gradient() const = delete;
    // deleting has_gradient_sparsity allows the automatic detection of gradient_sparsity to see that decompose does have an
    // implementation for it. The sparsity will thus always be dense (decompose::gradient_sparsity) and referred to
    // a problem with one objective
    bool has_gradient_sparsity() const = delete;
    // A decomposed problem does not have hessians (Tchebycheff is not differentiable)
    std::vector<vector_double> hessians(const vector_double &dv) const = delete;
    // deleting has_hessians allows the automatic detection of hessians to see that decompose does not have any, regardless
    // of whether the class it's built from has them. A decompose problem will thus never have hessians
    bool has_hessians() const = delete;
    // deleting has_hessians_sparsity allows the automatic detection of hessians_sparsity to see that decompose does have an
    // implementation for it. The hessians_sparsity will thus always be dense (decompose::hessians_sparsity) and referred to
    // a problem with one objective
    bool has_hessians_sparsity() const = delete;
Francesco Biscani
@bluescarni
Jun 10 2016 10:09
yeah awesome
Is this still necessary after your mod?
Francesco Biscani
@bluescarni
Jun 10 2016 10:16
btw I was just noticing that decompose could in principle use the has_hessians() method.. you write that Tchebycheff is not differentiable but in principle you could decide to enable hessians only if Tchebycheff is not selected?
Dario Izzo
@darioizzo
Jun 10 2016 10:17
:) yes but I was lazy
student project?
Francesco Biscani
@bluescarni
Jun 10 2016 10:17
yeah no problem.. I just meant to say there's a use case for having the has_hessians() property determined at runtime :)
Dario Izzo
@darioizzo
Jun 10 2016 10:18
I mean, who does multiobjective with gradients by decomposition?
It's a paper :)
Francesco Biscani
@bluescarni
Jun 10 2016 10:18
:)
Dario Izzo
@darioizzo
Jun 10 2016 10:18
can I remove those inner problem templates?
Francesco Biscani
@bluescarni
Jun 10 2016 10:18
I am not totally sure
I don't think they can be removed
Dario Izzo
@darioizzo
Jun 10 2016 10:19
I thought the logic is now that the sparsity is always there.
How do I trigger a call to the not implemented error then?
Francesco Biscani
@bluescarni
Jun 10 2016 10:20
I am not sure, but that template has to stay there.. because otherwise you always end up calling p.gradient_sparsity() which might not be there
remember that p is the user-defined problem
it's called from here
Dario Izzo
@darioizzo
Jun 10 2016 10:22
only if m_has_gradient_sparsity
Francesco Biscani
@bluescarni
Jun 10 2016 10:22
so if m_has_gradient_sparsity is true you could end up in that throw
yes
Dario Izzo
@darioizzo
Jun 10 2016 10:22
but if it's true, it's there
so the other one gets called
not the "you forgot to implement it, moron" one
Francesco Biscani
@bluescarni
Jun 10 2016 10:22
m_has_gradient_sparsity is detected possibly on the basis of has_gradient_sparsity()
perhaps you could implement a problem that overrides has_gradient_sparsity() to return true and then does not provide the gradient_sparsity() method?
Dario Izzo
@darioizzo
Jun 10 2016 10:23
So if I have a class with has_gradient_sparsity() and I do not put gradient_sparsity
ok
Francesco Biscani
@bluescarni
Jun 10 2016 10:23
yeah it looks like that
Dario Izzo
@darioizzo
Jun 10 2016 10:23
what a mindfuck
on it
Francesco Biscani
@bluescarni
Jun 10 2016 10:24
yeah it's a mindfuck
let's go
maybe the error message could be improved
in theory you could also trigger it from python if you manage to erase one method from the user-defined problem after you have constructed a problem with it
Dario Izzo
@darioizzo
Jun 10 2016 10:24
always
proposed text then?
Francesco Biscani
@bluescarni
Jun 10 2016 10:25
"gradient_sparsity() was signalled as present in the user-defined problem, but it is not actually implemented"
something along those lines?
"this indicates an error in the implementation of the user-defined problem"
"logical error"
I mean, for this to be triggered requires a user who is even aware of the has_gradient_sparsity() stuff and who manages to fuck it up badly
Dario Izzo
@darioizzo
Jun 10 2016 11:35
are you there?
So, consider the following class:
// We add a problem signalling gradient_sparsity() as present, but not implementing it
struct hhs_not_impl
{
    vector_double fitness(const vector_double &) const { return {1.,1.};}
    vector_double::size_type get_nobj() const { return 1u;}
    bool has_gradient_sparsity() const {return true;}
    bool has_hessians_sparsity() const {return true;}
    std::pair<vector_double, vector_double> get_bounds() const {return {{0.},{1.}};}
};
which implements the has_gradient_sparsity() methods but lies, because the gradient sparsity is not there
let's test it:
Francesco Biscani
@bluescarni
Jun 10 2016 11:37
ah but wait
ok tell me
Dario Izzo
@darioizzo
Jun 10 2016 11:39
    problem p6{hhs_not_impl{}};
    p6.gradient_sparsity();
you would expect this to throw, but it does not. In fact, I do not see why it should.
Look at the lines:
Francesco Biscani
@bluescarni
Jun 10 2016 11:41
I think it should throw, if anything because the user signals the presence of a user-defined sparsity but does not provide one
and I cannot find any good reason for not considering this an error
Dario Izzo
@darioizzo
Jun 10 2016 11:41
    template <typename U, typename std::enable_if<pagmo::has_gradient_sparsity<U>::value,int>::type = 0>
    sparsity_pattern gradient_sparsity_impl(const U &p) const
    {
        return p.gradient_sparsity();
    }
    template <typename U, typename std::enable_if<!pagmo::has_gradient_sparsity<U>::value,int>::type = 0>
    sparsity_pattern gradient_sparsity_impl(const U &) const
    {
        pagmo_throw(std::logic_error,"gradient_sparsity() was signalled as present in the user-defined problem, but it is not actually implemented: this "
            "indicates a logical error in the implementation of the concrete problem class, as the 'gradient_sparsity()' "
            "method is accessed only if 'has_gradient_sparsity()' returns true.");
    }
    template <typename U, typename std::enable_if<pagmo::has_gradient_sparsity<U>::value && pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &p)
    {
        return p.has_gradient_sparsity();
    }
Francesco Biscani
@bluescarni
Jun 10 2016 11:42
regarding why: it's because of the way the impl methods are implemented
    template <typename U, typename std::enable_if<pagmo::has_gradient<U>::value && pagmo::override_has_gradient<U>::value,int>::type = 0>
    static bool has_gradient_impl(const U &p)
    {
       return p.has_gradient();
    }
    template <typename U, typename std::enable_if<pagmo::has_gradient<U>::value && !pagmo::override_has_gradient<U>::value,int>::type = 0>
    static bool has_gradient_impl(const U &)
    {
       return true;
    }
    template <typename U, typename std::enable_if<!pagmo::has_gradient<U>::value,int>::type = 0>
    static bool has_gradient_impl(const U &)
    {
       return false;
    }
probably you meant this
ah wait no
Dario Izzo
@darioizzo
Jun 10 2016 11:42
no i meant what I posted
So you agree that at the moment it is impossible to generate the error.
Francesco Biscani
@bluescarni
Jun 10 2016 11:43
I had seen this pattern that you initially implemented and meant to ask about it, but never got around to it: pagmo::has_gradient_sparsity<U>::value && pagmo::override_has_gradient_sparsity<U>::value
I care about what we think it should do more than what it does now
the pattern was there for the gradients initially
I just re-implemented it as-is for the sparsity stuff, for consistency of behaviour
Dario Izzo
@darioizzo
Jun 10 2016 11:44
First I need to understand what it does and why, to then plan what it should do
So, am I understanding it correctly (as it is)?
Francesco Biscani
@bluescarni
Jun 10 2016 11:45
the basic idea which is wrong, IMO, is that it tests for the overrides only if the gradient/sparsity is implemented
I think so, but again, I care more about whether it does what we want it to do
Dario Izzo
@darioizzo
Jun 10 2016 11:46
Ok, so let's discuss what it is that we would like it to do.
Francesco Biscani
@bluescarni
Jun 10 2016 11:46
ok.. I'd like this to error out:
struct hhs_not_impl
{
    vector_double fitness(const vector_double &) const { return {1.,1.};}
    vector_double::size_type get_nobj() const { return 1u;}
    bool has_gradient_sparsity() const {return true;}
    bool has_hessians_sparsity() const {return true;}
    std::pair<vector_double, vector_double> get_bounds() const {return {{0.},{1.}};}
};
Dario Izzo
@darioizzo
Jun 10 2016 11:46
We want to throw in the case above? Not silently provide a default?
Francesco Biscani
@bluescarni
Jun 10 2016 11:47
I want an error of some kind, not sure if in the ctor or when you call gradient_sparsity() or both
Dario Izzo
@darioizzo
Jun 10 2016 11:47
Because one could argue that if I say it has a sparsity!!! and provide the default dense one .. I am not wrong ...
Francesco Biscani
@bluescarni
Jun 10 2016 11:47
if you want the default sparsity, don't write anything
this much should be clear from the doc
you are supposed to re-implement the methods only if the default behaviour does not suit you, but meh
Dario Izzo
@darioizzo
Jun 10 2016 11:48
I do not see it as strongly, but OK, let's go for changing the behaviour and having it throw
Then it suffices to change the logic in the checks of the impl methods...
brb
Francesco Biscani
@bluescarni
Jun 10 2016 11:52
I'd say so, tentatively. But for consistency we should do it for all the has_ functionality, so including hessians and gradients
and also the set_seed() stuff
I fucking knew that this would come back from the grave one day
anyway
Dario Izzo
@darioizzo
Jun 10 2016 11:56
back
so we want to check the four cases separately? true true, true false, false true, false false?
Francesco Biscani
@bluescarni
Jun 10 2016 12:01
not sure what you mean, what we have now is this:
    template <typename U, typename std::enable_if<pagmo::has_gradient_sparsity<U>::value && pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &p)
    {
        return p.has_gradient_sparsity();
    }
    template <typename U, typename std::enable_if<pagmo::has_gradient_sparsity<U>::value && !pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &)
    {
       return true;
    }
    template <typename U, typename std::enable_if<!pagmo::has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &)
    {
       return false;
    }
I guess it should be 2 functions instead of 3:
    template <typename U, typename std::enable_if<pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &p)
    {
        return p.has_gradient_sparsity();
    }
    template <typename U, typename std::enable_if<!pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &)
    {
       return true;
    }
mhm wait
nvm
Dario Izzo
@darioizzo
Jun 10 2016 12:03
:)
Francesco Biscani
@bluescarni
Jun 10 2016 12:03
so I want the override to be called regardless of other conditions
so maybe it's still 3 methods, just with different conditions
    template <typename U, typename std::enable_if<pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &p)
    {
        return p.has_gradient_sparsity();
    }
    template <typename U, typename std::enable_if<pagmo::has_gradient_sparsity<U>::value && !pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &)
    {
       return true;
    }
    template <typename U, typename std::enable_if<!pagmo::has_gradient_sparsity<U>::value && !pagmo::override_has_gradient_sparsity<U>::value,int>::type = 0>
    static bool has_gradient_sparsity_impl(const U &)
    {
       return false;
    }
like this maybe
Dario Izzo
@darioizzo
Jun 10 2016 12:05
no throw?
Francesco Biscani
@bluescarni
Jun 10 2016 12:05
this is just for the detection of the has_gradient_sparsity()
basically:
  • if the UDP has has_gradient_sparsity(), use it; otherwise
  • check if the gradient_sparsity() method exists
Dario Izzo
@darioizzo
Jun 10 2016 12:07
science coffee .... need to go, will be back in 40 mins
Francesco Biscani
@bluescarni
Jun 10 2016 12:07
I don't see any other changes necessary
Dario Izzo
@darioizzo
Jun 10 2016 12:44
back
Dario Izzo
@darioizzo
Jun 10 2016 13:04
Like this, the error will be generated at construction. Are we ok with that?
Francesco Biscani
@bluescarni
Jun 10 2016 13:04
what error is generated at construction?
Dario Izzo
@darioizzo
Jun 10 2016 13:05
function: gradient_sparsity_impl
where: /home/dario/Documents/PaGMOreborn/tests/../include/problem.hpp, 275
what: gradient_sparsity() was signalled as present in the user-defined problem, but it is not actually implemented: this indicates a logical error in the implementation of the concrete problem class
Francesco Biscani
@bluescarni
Jun 10 2016 13:05
ah it's because we query the sparsity upon construction to store it in the data members right?
Dario Izzo
@darioizzo
Jun 10 2016 13:05
yep
concrete -> user defined
Francesco Biscani
@bluescarni
Jun 10 2016 13:06
yeah sure ok.. I mean, it's inevitable that if we ask for the sparsity an error will be raised... and better sooner than later
Dario Izzo
@darioizzo
Jun 10 2016 13:06
agreed
Francesco Biscani
@bluescarni
Jun 10 2016 13:52
I'll have to apply these modifications to the python side as well
and these changes should apply to gradients, hessians and seed as well
Dario Izzo
@darioizzo
Jun 10 2016 14:08
yep doing the seed
hessians already in
set_seed will throw upon call, not construction
Dario Izzo
@darioizzo
Jun 10 2016 14:52
done pushed
btw, while trying to trigger the lines that check for possible overflow of nic+nec+nobj I managed to trigger bad_alloc
If you instantiate a problem with many objs or nec or nic, it will trigger a bad_alloc ..... :)
Just reporting as I guess @bluescarni is interested in these pathologies
many = std::numeric_limits<unsigned int>::max()
Dario Izzo
@darioizzo
Jun 10 2016 14:59
Testing the overflow lines is tricky in general. Can we a) remove them or b) not test them? @bluescarni
Francesco Biscani
@bluescarni
Jun 10 2016 19:24
I imagine at least some are testable. Those which are not should still stay