These are chat archives for bluescarni/pagmo_reborn

25th
May 2016
Francesco Biscani
@bluescarni
May 25 2016 08:59
I re-enabled the extract method for the meta-problems, so we can now do:
In [1]: import pygmo

In [2]: class prob(object):
    def fitness(self,v):
        return [v[0]*v[0]]
    def get_bounds(self):
        return ([0],[1])
   ...:     

In [3]: p = pygmo.problem(pygmo.translate(prob(),[1]))

In [4]: p.extract(pygmo.translate)
Out[4]: <pygmo.core.translate at 0x7f3c41d47048>

In [5]: p.extract(pygmo.translate)
Out[5]: <pygmo.core.translate at 0x7f3c41d47158>

In [6]: p.extract(pygmo.translate).extract(prob)
Out[6]: <__main__.prob at 0x7f3c41d3cb38>
I think I'll have to write some Python code that parses the Boost.Python-generated docstrings and replaces them with more Pythonic ones
the problem I am running into is that for some methods (like extract) I need to write a Python implementation that then calls some lower-level exposed C++ function
and the documentation of pure-python methods generated by sphinx is markedly different from the documentation of exposed C++ methods, so it does not look so nice
anyway, for now I'll focus on exposing stuff
but for the documentation we'll have to decide once and for all (and sooner rather than later) how to name things consistently... e.g., is it "concrete problem" or "user-defined problem"?
Marcus Märtens
@CoolRunning
May 25 2016 09:18
I have no preference, as long as it is consistent.
Dario Izzo
@darioizzo
May 25 2016 10:10
I would go for "pagmo problem" vs. "user problem"
Ok for the extract; I had deactivated it because I did not want to think about the consequences of allowing multiple extracts (I see possible traps in feval counting, for example), but OK, let's allow it and see where it brings us :)
Francesco Biscani
@bluescarni
May 25 2016 10:13
but extract only allows access to the underlying user problem, which does not expose fevals
Dario Izzo
@darioizzo
May 25 2016 10:13
if I chain translate with decompose for example ...
Francesco Biscani
@bluescarni
May 25 2016 10:14
I still don't see it
Dario Izzo
@darioizzo
May 25 2016 10:14
Does translate{ackley} have a feval counter?
Francesco Biscani
@bluescarni
May 25 2016 10:15
nope, those are still in the deleted list
Dario Izzo
@darioizzo
May 25 2016 10:15
the data members?
Francesco Biscani
@bluescarni
May 25 2016 10:15
the translate class does not have any method mentioning or relating to counters
if that is what you mean
Dario Izzo
@darioizzo
May 25 2016 10:16
No, I want to know if it has m_fevals as a data member.
Francesco Biscani
@bluescarni
May 25 2016 10:16
it's not an accessible data member I believe, as that is a private data member of the problem class and we are using public inheritance
but that is unrelated to the presence of extract
Dario Izzo
@darioizzo
May 25 2016 10:17
but it exists right? cannot be accessed but its there ... or not?
Francesco Biscani
@bluescarni
May 25 2016 10:18
it does exist, in the sense that when we call the fitness() member of a problem{translate{ackley}} there's two counters being increased
but only one of these counters can be accessed by the user
Dario Izzo
@darioizzo
May 25 2016 10:18
but that's in problem, not in translate
is there not a feval counter also in translate that is kept idle and inaccessible?
Francesco Biscani
@bluescarni
May 25 2016 10:19
there is a counter in problem, which you can inspect via problem's methods, and there is a counter in translate which is as-if it did not exist from the point of view of the user
or the developer
it's completely inaccessible
Dario Izzo
@darioizzo
May 25 2016 10:19
that's what I was saying ... OK
Francesco Biscani
@bluescarni
May 25 2016 10:19
barring weird undefined behaviour casting
but again that is unrelated to the extract method
extract does not make the counter visible or anything like it
Dario Izzo
@darioizzo
May 25 2016 10:21
problem p1(ackley(50));
problem p2{p1};
p1.fitness(x);
p2.fitness(x);
this is valid?
Francesco Biscani
@bluescarni
May 25 2016 10:22
it is, but p2 is a deep copy of p1, not a problem containing another problem
Dario Izzo
@darioizzo
May 25 2016 10:22
also with braces?
Francesco Biscani
@bluescarni
May 25 2016 10:22
yeah sure
problem should be used only as a terminator of the chain
Dario Izzo
@darioizzo
May 25 2016 10:22
because the template is deactivated right?
Francesco Biscani
@bluescarni
May 25 2016 10:22
yeps
at least that is the idea
barring bugs :)
Dario Izzo
@darioizzo
May 25 2016 10:23
and counters are only active in the end point
Francesco Biscani
@bluescarni
May 25 2016 10:23
in the case of meta-problems there's additional, invisible, counters being increased
Dario Izzo
@darioizzo
May 25 2016 10:24
are they increased though? how?
Francesco Biscani
@bluescarni
May 25 2016 10:24
because meta-problems derive from (but are not the same type as) a problem
Dario Izzo
@darioizzo
May 25 2016 10:25
but the method fitness is not inherited / called so the counters do not get incremented?
going for lunch ... later ...
Francesco Biscani
@bluescarni
May 25 2016 10:26
    /// Fitness of the translated problem
    vector_double fitness(const vector_double &x) const
    {
        vector_double x_deshifted = translate_back(x);
        return static_cast<const problem*>(this)->fitness(x_deshifted);
    }
again, it's an implementation detail that does not really matter
Francesco Biscani
@bluescarni
May 25 2016 10:42
I think I have found an instance where knowing whether the sparsity is user provided or not matters for performance
In [67]: time pygmo.problem(pygmo.rosenbrock(dim = 1000))
CPU times: user 0 ns, sys: 127 µs, total: 127 µs
Wall time: 133 µs
In [68]: time pygmo.problem(pygmo.translate(pygmo.rosenbrock(dim = 1000),a))
CPU times: user 0 ns, sys: 336 ms, total: 336 ms
Wall time: 335 ms
so the reason why this happens, I believe, is the following: in the non-translated case, problem sees that rosenbrock does not provide a sparsity method, so it provides the automatic (dense) sparsity method itself, but it does not run any verification on the returned sparsity upon construction (it knows the sparsity is the dense one, so it does not need to be checked)
however, after we translate the rosenbrock, the translate problem always provides the sparsity method (with the sparsity computed as dense internally), but the problem class has no way of determining that the implemented sparsity is the automatically-generated dense one (rather than a real user-provided sparsity)
Francesco Biscani
@bluescarni
May 25 2016 10:49
so the problem constructed from translate does run a full check on the consistency of the returned sparsity upon construction, and that takes some time
so this is a debug build, and the timings in a release build are probably a fraction of that
but still, the performance disparity is quite striking, and it might matter if we start constructing thousands of problems
Dario Izzo
@darioizzo
May 25 2016 11:17
I do not see a way out .... but yes your analysis seems correct
Francesco Biscani
@bluescarni
May 25 2016 11:24
we just need to restore the has_sparsity() machinery
it's no different from the has_gradient() machinery, the only difference is that you provide a default implementation on top of that
(which we may eventually do as well with numerically calculated gradients)
Dario Izzo
@darioizzo
May 25 2016 11:27
I do not see it ...
Francesco Biscani
@bluescarni
May 25 2016 12:26
basically:
  • if the user-defined problem (UDP) has has_sparsity(), call it upon problem construction. If it returns true, m_has_sparsity is set to true and the sparsity is computed and checked; otherwise m_has_sparsity is false and no check is performed (sparsity assumed dense)
  • if the UDP does not have has_sparsity(), detect the presence of the sparsity method: if it is there, m_has_sparsity is true and the sparsity is checked; otherwise m_has_sparsity is false and no check is performed (sparsity assumed dense)
it's really like how the gradient detection is done, except that sparsity has a default implementation if the user does not provide one
so, if translate is constructed with a UDP which does have sparsity, then its has_sparsity() method returns true, otherwise it will return false
Dario Izzo
@darioizzo
May 25 2016 12:28
ok ...
Francesco Biscani
@bluescarni
May 25 2016 12:28
maybe at the problem level we want to call it has_user_defined_sparsity()
Dario Izzo
@darioizzo
May 25 2016 12:28
help me here:
                else if (m_strategy == 2u) { /* strategy DE1 */
                    tmp = popold[i];
                    auto n = rand_ind_idx(m_e);
                    auto L = 0u;
                    do {
                        tmp[n] = popold[r[0]][n] + m_F * (popold[r[1]][n] - popold[r[2]][n]);
                        n = (n + 1u) % dim;
                        ++L;
                    } while ((drng(m_e) < m_CR) && (L < dim));
Francesco Biscani
@bluescarni
May 25 2016 12:28
just to make it clear that the sparsity is always there
Dario Izzo
@darioizzo
May 25 2016 12:28
whats wrong with the above loop?
Francesco Biscani
@bluescarni
May 25 2016 12:29
hard to say.. what is the issue?
Dario Izzo
@darioizzo
May 25 2016 12:30
==10692== valgrind: Unrecognised instruction at address 0x53bead5.
==10692==    at 0x53BEAD5: std::(anonymous namespace)::__x86_rdrand() (random.cc:69)
==10692==    by 0x53BEC71: std::random_device::_M_getval() (random.cc:130)
==10692==    by 0x437CCA: std::random_device::operator()() (random.h:1619)
==10692==    by 0x436643: __static_initialization_and_destruction_0(int, int) (rng.hpp:25)
==10692==    by 0x436C16: _GLOBAL__sub_I__Z14init_unit_testv (de.cpp:27)
==10692==    by 0x4C9A8C: __libc_csu_init (in /home/dario/Documents/PaGMOreborn/build/tests/de)
==10692==    by 0x5BC469E: (below main) (in /usr/lib/libc-2.23.so)
==10692== Your program just tried to execute an instruction that Valgrind
==10692== did not recognise.  There are two possible reasons for this.
something with the drng(m_e)?
Francesco Biscani
@bluescarni
May 25 2016 12:30
does valgrind run ok with other similar tests?
Dario Izzo
@darioizzo
May 25 2016 12:30
good question
let me check
Francesco Biscani
@bluescarni
May 25 2016 12:31
so my guess is: Unrecognised instruction at address 0x53bead5
std::(anonymous namespace)::__x86_rdrand()
so the way valgrind works is that it is like a small virtual machine that translates binary instructions into its own language
Dario Izzo
@darioizzo
May 25 2016 12:31
no, actually it fails on other tests too
Francesco Biscani
@bluescarni
May 25 2016 12:32
it looks like the C++ standard library is using some low-level instructions
which valgrind does not know how to translate into its own machine language
I had similar errors in the past when new Intel processors started to come out
Dario Izzo
@darioizzo
May 25 2016 12:33
so not our fault ... we do not care, right? but we cannot use valgrind with our rng?
Francesco Biscani
@bluescarni
May 25 2016 12:33
the compiled code would contain AVX instructions which were not understood by the current version of valgrind
are you using any special flag for the compilation?
like march or similar?
Dario Izzo
@darioizzo
May 25 2016 12:33
I compile with YACMA
Francesco Biscani
@bluescarni
May 25 2016 12:34
in release mode or debug?
Dario Izzo
@darioizzo
May 25 2016 12:34
debug
Francesco Biscani
@bluescarni
May 25 2016 12:34
and is this clang or gcc?
Dario Izzo
@darioizzo
May 25 2016 12:34
gcc 5.3.0
valgrind version?
ok it looks like the fix was pushed to the valgrind source after the latest release was published
the latest release was in sept 2015
Dario Izzo
@darioizzo
May 25 2016 12:36
3.11.0
Francesco Biscani
@bluescarni
May 25 2016 12:36
the bug report above is later
so for now we cannot use valgrind apparently
use -fsanitize=address
Dario Izzo
@darioizzo
May 25 2016 12:36
good to know
Francesco Biscani
@bluescarni
May 25 2016 12:36
instead
set it in the CXXFLAGS from ccmake
Dario Izzo
@darioizzo
May 25 2016 12:37
k
Francesco Biscani
@bluescarni
May 25 2016 12:37
we should probably enable that when the tests are run as well
in the CI I mean
Dario Izzo
@darioizzo
May 25 2016 12:38
what does it do?
Francesco Biscani
@bluescarni
May 25 2016 12:38
it's basically like valgrind, but it has a slowdown of 2x instead of 30x
it checks memory accesses
Dario Izzo
@darioizzo
May 25 2016 12:39
and the output where is it?
Francesco Biscani
@bluescarni
May 25 2016 12:39
if everything is fine your program executes normally without any extra output
if there are problems it complains loudly and prints colorful messages
and returns a non-zero exit status, so the test actually fails from the point of view of make test
Dario Izzo
@darioizzo
May 25 2016 12:41
so if it compiles I am good?
I mean execute?
Francesco Biscani
@bluescarni
May 25 2016 12:42
yes execute
Francesco Biscani
@bluescarni
May 25 2016 13:15
the exposition is coming along nicely:
In [7]: p = pygmo.problem(pygmo.hock_schittkowsky_71())

In [8]: p.gradient_sparsity()
Out[8]: 
array([[0, 0],
       [0, 1],
       [0, 2],
       [0, 3],
       [1, 0],
       [1, 1],
       [1, 2],
       [1, 3],
       [2, 0],
       [2, 1],
       [2, 2],
       [2, 3]])

In [9]: p.hessians([1,2,3,4])
Out[9]: 
[array([ 8.,  4.,  4.,  7.,  1.,  1.]),
 array([ 2.,  2.,  2.,  2.]),
 array([-12.,  -8.,  -4.,  -6.,  -3.,  -2.])]
there's some extra copying around due to the fact that we need to convert often between std::vector<double> and numpy arrays
Dario Izzo
@darioizzo
May 25 2016 13:17
but lists are still supported right?
Francesco Biscani
@bluescarni
May 25 2016 13:17
if std::vector allowed to extract the pointer to the data and "steal" it, it could be done with zero copying
yes, but all outputs are done to NumPy arrays
Dario Izzo
@darioizzo
May 25 2016 13:17
why?
Francesco Biscani
@bluescarni
May 25 2016 13:17
(see how hessians is called with [1,2,3,4])
Dario Izzo
@darioizzo
May 25 2016 13:18
I mean why the output default is array?
Francesco Biscani
@bluescarni
May 25 2016 13:18
because that's the standard for numerical data in Python
Dario Izzo
@darioizzo
May 25 2016 13:18
I hate it ... but OK if its standard
Francesco Biscani
@bluescarni
May 25 2016 13:18
basically you can input data via lists or array, but you always get out arrays
Dario Izzo
@darioizzo
May 25 2016 13:18
I would have done the opposite, always output lists ...
I just do not like the extra text to represent numbers
Francesco Biscani
@bluescarni
May 25 2016 13:19
lists are embarrassingly slow, like orders of magnitude slower
Dario Izzo
@darioizzo
May 25 2016 13:20
not in our case ... they are only used to represent our data, the computations are done efficiently anyway, right?
Francesco Biscani
@bluescarni
May 25 2016 13:20
but they are handy to input data interactively
you still have to convert a std::vector<double> to something pythonic
if it's a list, the conversion is going to be much slower
Dario Izzo
@darioizzo
May 25 2016 13:20
in both cases ...
Francesco Biscani
@bluescarni
May 25 2016 13:21
no, the numpy array API basically allows one to do a memcpy() of the raw array
Dario Izzo
@darioizzo
May 25 2016 13:21
btw, if lists are embarrassingly slow, what does this tell us about python developers?
Francesco Biscani
@bluescarni
May 25 2016 13:21
with lists you have to keep on pushing back, plus all the overhead of the interpreter in dealing with the runtime identification of the type you are pushing in
Dario Izzo
@darioizzo
May 25 2016 13:21
why do they not just implement them faster?
Francesco Biscani
@bluescarni
May 25 2016 13:21
because lists need to be able to represent collections of heterogeneous data
Dario Izzo
@darioizzo
May 25 2016 13:21
ah right
then I would have list([2,3,4]) and [2,3,4]
for list and array
not [2,3,4] and array([2,3,4])
but thats me
Francesco Biscani
@bluescarni
May 25 2016 13:22
for the screen output you mean?
Dario Izzo
@darioizzo
May 25 2016 13:22
yes
Francesco Biscani
@bluescarni
May 25 2016 13:23
oh well you can always monkey patch the array repr() :)
Dario Izzo
@darioizzo
May 25 2016 13:23
considering it
just to throw off the user
Francesco Biscani
@bluescarni
May 25 2016 13:23
// Convert a vector of doubles into a numpy array.
inline bp::object vd_to_a(const pagmo::vector_double &v)
{
    // The dimensions of the array to be created.
    npy_intp dims[] = {boost::numeric_cast<npy_intp>(v.size())};
    // Attempt creating the array.
    PyObject *ret = PyArray_SimpleNew(1,dims,NPY_DOUBLE);
    if (!ret) {
        pygmo_throw(PyExc_RuntimeError,"couldn't create a NumPy array: the 'PyArray_SimpleNew()' function failed");
    }
    if (v.size()) {
        // Copy over the data.
        std::copy(v.begin(),v.end(),static_cast<double *>(PyArray_DATA((PyArrayObject *)(ret))));
    }
    // Hand over to boost python.
    return bp::object(bp::handle<>(ret));
}
that's the complete code to convert a vector of double to a numpy array
PyArray_SimpleNew(1,dims,NPY_DOUBLE); this creates the uninitialised memory area
std::copy(v.begin(),v.end(),static_cast<double *>(PyArray_DATA((PyArrayObject *)(ret))));
this copies the data in
Dario Izzo
@darioizzo
May 25 2016 13:24
nicely written
pygmo_throw is different from pagmo_throw?
Francesco Biscani
@bluescarni
May 25 2016 13:25
it's similar in usage, but a bit different in implementation
it uses the Python mechanism for exception throwing
Dario Izzo
@darioizzo
May 25 2016 13:25
ah, I see ...
Francesco Biscani
@bluescarni
May 25 2016 13:26
I use it there because I can then set the exception type directly
without relying on Boost.Python to decide how to convert C++ exceptions to Python exceptions
Dario Izzo
@darioizzo
May 25 2016 13:27
can you also log the parameters that make PyArray_SimpleNew fail?
i mean in the error message?
Francesco Biscani
@bluescarni
May 25 2016 13:27
I do in other cases, but there I would not know what to put
I think that function can fail only if the memory allocation fails
but I am not sure, the NumPy documentation does not say much
Dario Izzo
@darioizzo
May 25 2016 13:28
couldn't create a NumPy array: the 'PyArray_SimpleNew(a,b,c)' function failed with a=1, b=$dims, c= $NPY_DOUBLE
k
just being pedantic
Francesco Biscani
@bluescarni
May 25 2016 13:29
mhmh NPY_DOUBLE is a macro expanding to a meaningless integral value, but I can do that
I don't expect to ever see that message
it could not arise from a user error I believe
Dario Izzo
@darioizzo
May 25 2016 13:29
:) well then lets remove it
Francesco Biscani
@bluescarni
May 25 2016 13:29
the error message you mean?
Dario Izzo
@darioizzo
May 25 2016 13:29
just kidding
Francesco Biscani
@bluescarni
May 25 2016 13:29
lol
so I still need to use a list when returning hessians
it's a list of arrays
Dario Izzo
@darioizzo
May 25 2016 13:30
right
Francesco Biscani
@bluescarni
May 25 2016 13:30
I could not use a 2d array because the hessians have different length in principle
Dario Izzo
@darioizzo
May 25 2016 13:30
doesn't numpy have a matrix class?
ah right
Francesco Biscani
@bluescarni
May 25 2016 13:31
the array is multidimensional, but this is a jagged matrix
Dario Izzo
@darioizzo
May 25 2016 13:31
the sparse representation thingy
Francesco Biscani
@bluescarni
May 25 2016 13:31
yep
what people sometimes do is to return the hessians concatenated one after the other, plus an array encoding their lengths
that is, you return 2 arrays:
1 array with the hessians one after the other
Dario Izzo
@darioizzo
May 25 2016 13:32
where did you see this?
I mean "people" who?
Francesco Biscani
@bluescarni
May 25 2016 13:32
1 array of ints in which you store the initial index of each hessian
it's the way fortran people deal with the deficiency of their language :)
instead of using proper sparse data structures
Dario Izzo
@darioizzo
May 25 2016 13:32
those are not "people" right?
Francesco Biscani
@bluescarni
May 25 2016 13:32
they are subhumans, I agree
Dario Izzo
@darioizzo
May 25 2016 13:33
"fortranimals"
Francesco Biscani
@bluescarni
May 25 2016 13:33
:)
Dario Izzo
@darioizzo
May 25 2016 13:33
would you say our data structures are proper?
Francesco Biscani
@bluescarni
May 25 2016 13:34
I think they work well in C++, in Python there's the general slowness of dealing with a list in this case... but I am not sure how much it matters overall
Dario Izzo
@darioizzo
May 25 2016 13:34
These things are called only twice 1) at construction 2) at the beginning of some evolve
So, overall I do not expect it to be a problem
Francesco Biscani
@bluescarni
May 25 2016 13:35
right ok
Dario Izzo
@darioizzo
May 25 2016 13:35
"famous last words" :)
Francesco Biscani
@bluescarni
May 25 2016 13:39
it's a bummer though that you cannot "extract" the data as a C array from an std::vector
If I could do:
std::vector<double> v(100);
double *p = v.steal_pointer();
assert(v.size() == 0);
then we could have a zero-copy interface from Python
Dario Izzo
@darioizzo
May 25 2016 13:41
but you do have the data pointer no?
why do you want to empty v?
Francesco Biscani
@bluescarni
May 25 2016 13:42
yes but I need v to cede me the ownership of that pointer
Dario Izzo
@darioizzo
May 25 2016 13:42
otherwise?
Francesco Biscani
@bluescarni
May 25 2016 13:42
because otherwise when v is destructed I also lose the data in there
Dario Izzo
@darioizzo
May 25 2016 13:42
ah right
Francesco Biscani
@bluescarni
May 25 2016 13:42
v's destructor will free the memory
Dario Izzo
@darioizzo
May 25 2016 13:43
it's one of those cases where the purpose of a std::vector is defeated ...
Francesco Biscani
@bluescarni
May 25 2016 13:43
true, but I think the reason is that you cannot possibly know how to delete that pointer p you would extract
with std::vector you can provide a custom allocator
Dario Izzo
@darioizzo
May 25 2016 14:50
   Gen:        Fevals:          Best:            dx:            df:
      1             20        23808.2        21.4817         262524
    101           2020        10.2685        4.11667        43.4044
    201           4020        6.64312       0.715138        1.31039
    301           6020        6.19674       0.802406         1.0907
    401           8020        5.71151       0.771571       0.713332
    501          10020        5.48576        0.36169       0.529762
    601          12020        5.15336       0.491558       0.560896
    701          14020        4.77696       0.773733       0.795316
    801          16020        4.43599        1.03855        1.13628
    901          18020        4.00437        1.01982       0.955167
   1001          20020        3.81414       0.937409       0.885382
   1101          22020        3.43481       0.702748       0.653919
   1201          24020        3.34478       0.645534       0.671636
   1301          26020        3.06973       0.673379       0.742006
   1401          28020        2.70379       0.858337       0.812823
   1501          30020        2.43196       0.925882       0.837407
   1601          32020        2.27288       0.992615       0.996492
   1701          34020        2.01591        1.20559        1.10874
   1801          36020        1.83995        0.96881       0.948736
   1901          38020        1.66265        1.22214        1.12603
   2001          40020        1.34316        1.19912        1.09941
   2101          42020        1.18253       0.730219       0.590159
   2201          44020       0.861519       0.707184       0.546842
   2301          46020        0.73895       0.769614       0.569827
   2401          48020       0.664918        1.05982       0.643856
   2501          50020       0.540472       0.680446       0.416753
   2601          52020       0.433869       0.997717       0.523356
   2701          54020       0.385972       0.959368       0.561482
   2801          56020       0.230782       0.792881       0.333384
   2901          58020       0.168633        0.99462       0.374793
   3001          60020       0.150408       0.779557       0.328375
   3101          62020       0.114849       0.992172       0.325656
   3201          64020      0.0955015       0.769893       0.219042
   3301          66020      0.0740114       0.707261       0.197989
   3401          68020      0.0508679       0.825148         0.2196
   3501          70020      0.0120906        1.31759       0.258122
   3601          72020     0.00829018       0.722843      0.0767988
   3701          74020     0.00694011       0.586273      0.0567462
   3801          76020     0.00472087       0.471222      0.0420226
   3901          78020     0.00396957       0.499797       0.026791
   4001          80020     0.00166044       0.422422      0.0286103
   4101          82020    0.000577068       0.387215        0.01767
   4201          84020    0.000285343       0.262523     0.00976312
   4301          86020    0.000158624       0.225524     0.00560703
   4401          88020    0.000153541       0.225542     0.00561211
   4501          90020    9.85878e-05       0.225735     0.00566693
   4601          92020    7.96299e-05       0.126399     0.00168879
   4701          94020    3.23695e-05      0.0990953    0.000971981
   4801          96020    1.47372e-05      0.0544535    0.000307859
   4901          98020    1.25364e-05      0.0545121    0.000309982

   Gen:        Fevals:          Best:            dx:            df:
   5001         100020    6.93499e-06      0.0404227    0.000193404
   5101         102020    3.42974e-06      0.0423394    0.000196466
   5201         104020    2.62663e-06     0.00924184    2.73367e-05
   5301         106020    2.05185e-06        0.01126    2.28739e-05
   5401         108020    1.19462e-06      0.0122254    2.15072e-05
   5501         110020    3.00655e-07      0.0144862    2.23868e-05
   5601         112020    2.84273e-07      0.0144956    2.24032e-05
   5701         114020    1.44747e-07      0.0146547    2.22424e-05
   5801         116020    1.44738e-07      0.0146545    2.22424e-05
   5901         118020    7.41046e-08      0.0152854     2.2313e-05
   6001         120020    4.84686e-08      0.0152999    2.23333e-05
   6101         122020    3.83983e-08      0.0153134    2.23434e-05
   6201         124020    1.40618e-08     0.00333834    1.23984e-06
   6301         126020    9.47149e-09     0.00354061    1.24433e-06
Exit condition -- ftol < 1e-06
Our first problem solved by PaGMO reborn :)
A very challenging rosenbrock{10u}
Francesco Biscani
@bluescarni
May 25 2016 14:53
YAY! :clap:
Marcus Märtens
@CoolRunning
May 25 2016 14:54
:+1:
Awesome
Francesco Biscani
@bluescarni
May 25 2016 15:23
I think you forgot to commit the test file?
Dario Izzo
@darioizzo
May 25 2016 15:24
as usual :)
done
tests are still missing, I am using the test file as a main as I develop
Francesco Biscani
@bluescarni
May 25 2016 15:26
sure
Dario Izzo
@darioizzo
May 25 2016 15:28
This is one of those algorithms where we extract with get_x() and get_f() at the beginning and we operate on those. Still, instead of waiting until the end to update the copied pop, I am doing it on the way, so as to be able to use the best_idx and worst_idx methods of the population. This does not create extra fevals, thanks to the use of set_xf().
Francesco Biscani
@bluescarni
May 25 2016 15:36
was it a big effort to do the porting?
Dario Izzo
@darioizzo
May 25 2016 15:37
for this particular one, no. We have all the functionalities in place. It's only slightly tricky for the log and the stopping criteria.
Francesco Biscani
@bluescarni
May 25 2016 15:37
nice
Dario Izzo
@darioizzo
May 25 2016 15:38
Also coding style is different and the standard is higher ... but I guess that is a good thing
                /*-----We select at random 5 indexes from the population---------------------------------*/
                std::vector<vector_double::size_type> idxs(NP);
                std::iota(idxs.begin(), idxs.end(), 0u);
                for (auto j = 0u; j < 5u; ++j) { // Durstenfeld's algorithm to select 5 indexes at random
                    auto idx = std::uniform_int_distribution<vector_double::size_type>(0u, NP - 1u - j)(m_e);
                    r[j] = idxs[idx];
                    std::swap(idxs[idx], idxs[NP - 1u - j]);
                }
this is the only algorithmically different part, which I implemented from scratch
To get 5 random indexes from a vector (without repetition); if needed elsewhere, this could be done nicely in a generic-programming (std-like) style in the generic utils
Francesco Biscani
@bluescarni
May 25 2016 15:41
std::iota(idxs.begin(), idxs.end(), 0u); this should be inited with vector_double::size_type
Dario Izzo
@darioizzo
May 25 2016 15:41
:)
true
Francesco Biscani
@bluescarni
May 25 2016 15:41
pedantic!
Dario Izzo
@darioizzo
May 25 2016 15:44
This message was deleted
actually, sorry:
template <class InputRandomIt, class OutputRandomIt, class RandomFunc>
void random_elements(InputRandomIt first, InputRandomIt last, OutputRandomIt d_first, OutputRandomIt d_last, RandomFunc &r);
Dario Izzo
@darioizzo
May 25 2016 15:51
and then I could write the snippet above as:
std::vector<vector_double::size_type> idxs(NP);
std::iota(idxs.begin(), idxs.end(), 0u);
random_elements(idxs.begin(),idxs.end(),r.begin(),r.begin()+5, m_e);
going home ... later :)