Version 1.1.0 released, and now my watch begins ...
Christopher Corley
@cscorley
hey! learning some optunity this weekend and i've run into a question. i'm working on learning LDA parameters, but
i'm stuck on how to give it a range of integers to check for the number of topics, and not use floats. any suggestions beyond casting the number with int()?
hey @cscorley sorry for the slow reply ... Optunity internally uses continuous optimizers, so at the moment the only option is to cast to int as you've been doing
adding explicit support for integers is on the to-do list currently, but admittedly not very high
if you have use cases where casting to int() is problematic for some reason we would be interested to hear about those!
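A minimal sketch of the casting pattern described above. The `evaluate_lda` score function here is a hypothetical stand-in (a real objective would train and score an LDA model, e.g. with gensim); the point is just where the `int()` cast goes:

```python
# Sketch of the int-casting pattern for an integer hyperparameter
# (num_topics). evaluate_lda is a toy stand-in for real model training.

def evaluate_lda(num_topics):
    """Pretend score function; higher is better, peaking at 20 topics."""
    return -abs(num_topics - 20)

def objective(num_topics):
    # Optunity's solvers propose floats, so cast before training.
    num_topics = int(num_topics)
    return evaluate_lda(num_topics)

# Optunity would call this with float proposals, e.g.:
#   optimum, details, _ = optunity.maximize(objective, num_evals=50,
#                                           num_topics=[2, 100])
print(objective(17.6))  # the float 17.6 is evaluated as 17 topics, -> -3
```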
Christopher Corley
@cscorley
OK! good to know.
the only foreseeable case i have is possibly wasting an evaluation on an integer that's previously been tried
Marc Claesen
@claesenm
Yeah, that's a good point. I'm playing with the idea of specifying an 'equivalence distance' (could be a callable) for each hyperparameter that would prevent that stuff from happening. This would probably be a fairly clean fix, and solves a variety of requests from our users in one go.
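Until something like that lands, one way to approximate the idea from the user side is to memoize the objective on the cast integer, so repeated float proposals that round to the same value reuse the stored score instead of triggering a new expensive evaluation. A sketch (the objective is again a toy stand-in):

```python
import functools

def cached_on_int(f):
    """Memoize an objective on the int-cast value of its argument, so
    float proposals that round to the same integer are not re-evaluated."""
    cache = {}
    @functools.wraps(f)
    def wrapper(x):
        key = int(x)
        if key not in cache:
            cache[key] = f(key)
        return cache[key]
    wrapper.cache = cache
    return wrapper

calls = []

@cached_on_int
def objective(num_topics):
    calls.append(num_topics)   # track how often we actually evaluate
    return -abs(num_topics - 20)

objective(17.2)
objective(17.9)   # rounds to the same integer: served from the cache
print(len(calls))  # -> 1 real evaluation, not 2
```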
It should be straightforward to support Passage. What parameters would make sense to tune there? The size of the recurrent layer?
Arch111
@Arch111
Hi there. How do I tune a sklearn ML model (using optunity.maximize_structured with a search space) without any cross-validation, on specific x_train and x_test data (i.e. I determine which data set is used for training and which for testing)? I would appreciate a simple and clear example. Thanks
directorscut82
@directorscut82
hi.. i would like to ask if the matlab wrapper supports python's aggregator "mean_and_list"? I cannot find a way to return the individual fold errors for further statistical processing (e.g. confidence intervals etc.) [except maybe hacking global variables inside the objective function or writing to files, which is not very nice or fast ;)]
Marc Claesen
@claesenm
@directorscut82 unfortunately, the matlab wrapper does not support the mean_and_list aggregator at this point
@Arch111 you can do this by making your own objective function, e.g.
x_train = foo
x_test = bar
def create_objfun(your list of hyperpars):
    train with x_train
    predict with x_test
    return score
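A runnable sketch of that pattern, using a fixed train/test split and no cross-validation. The "model" below is a toy threshold classifier so the example is self-contained; in practice you would train an sklearn estimator on x_train with the proposed hyperparameters and return its score on x_test:

```python
# Objective function over a fixed train/test split (no cross-validation).
# The threshold classifier is a stand-in for a real sklearn estimator.

x_train = [1.0, 2.0, 8.0, 9.0]
y_train = [0, 0, 1, 1]
x_test = [1.5, 8.5]
y_test = [0, 1]

def objective(threshold):
    # A real objective would fit an estimator on x_train/y_train here;
    # threshold plays the role of a tunable hyperparameter.
    predictions = [int(x > threshold) for x in x_test]
    correct = sum(p == y for p, y in zip(predictions, y_test))
    return correct / len(y_test)   # test-set accuracy

# Optunity then maximizes this directly, e.g.:
#   optunity.maximize(objective, num_evals=50, threshold=[0, 10])
print(objective(5.0))  # -> 1.0 (both test points classified correctly)
```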
directorscut82
@directorscut82
@claesenm thanks for your answer (at least now i am sure there isn't anything else to try)
Marc Claesen
@claesenm
@directorscut82 note, though, that the aggregation of scores is done on the MATLAB side of things, so the necessary modifications in Optunity's MATLAB code are fairly minimal
directorscut82
@directorscut82
hello again.. is it possible to do stratified cross-validation in matlab (similar proportions of samples from each class in every fold)? ... i know there is strata, but if i understand correctly you have to explicitly state the indices
directorscut82
@directorscut82
also, i believe python's strata_by_labels does this, but i cannot find it in matlab
also x2 ;) is strata = [[all_indices_class_1], [all_indices_class_2], ..., [all_indices_class_N]] the 'manual' way to do stratified sampling in cross_validate?
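For reference, that per-class index grouping is exactly what the Python helper produces, and it can be built in a few lines. A sketch of the grouping logic (so the same lists could be constructed manually and passed as strata on the MATLAB side):

```python
from collections import defaultdict

def strata_from_labels(labels):
    """Group sample indices by class label: one list of indices per
    class, i.e. the 'manual' strata format [[class_1 indices], ...]."""
    groups = defaultdict(list)
    for idx, label in enumerate(labels):
        groups[label].append(idx)
    return list(groups.values())

labels = ['a', 'b', 'a', 'b', 'a']
print(strata_from_labels(labels))  # -> [[0, 2, 4], [1, 3]]
```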
Arch111
@Arch111
@claesenm Hey, thanks Marc for the answer!
I have a question regarding how the maximize and maximize_structured methods work and converge. Every time i run maximize_structured with the same constraints and conditions, the method seems to return different optimized parameters
Arch111
@Arch111
What should be done to make it return the same optimized parameters (or at least ones close to the global optimum)? I observed that the returned "optimized" parameter sets tend to be dramatically different from each other, which suggests they are not close to the global optimum. Thanks
Arch111
@Arch111
How does optunity compare with other tools like hyperopt or spearmint? Any opinions or experiences ?
Thanks
Royi
@RoyiAvital
@Arch111 , I have no experience but from all the 3 you mentioned it seems that Optunity is the only one which supports MATLAB. So if that makes a difference, there you go...