Tim Head
@betatim
which is not super useful but yeah. can you make a toy example that evaluates more quickly but is similar in the shape of the PDF and see what happens?
smutch
@smutch
:) Creating a smaller test problem might be tricky, but now that I know that the methodology should scale to these numbers of parameters this is definitely something I will do.
Thanks again for your help and for sharing & maintaining such a great resource!
Tim Head
@betatim
yeah it is tricky, though debugging in 10D is also tricky because you can't display anything :)
we recently discussed the partial dependence plots (from plots.py) and how it can look like the minimum is not where you think it is https://github.com/scikit-optimize/scikit-optimize/issues/626#issuecomment-363197850
cooking up the final example was "fun"
smutch
@smutch
I was seeing similar results when I tried to plot up my 10D case so that's also good to know. 🙂
Christian Schell
@cschell
hey, quick question about the x0 and y0 parameters for gp_minimize: do I understand it correctly that I can use these parameters to feed the function results I already know?
Tim Head
@betatim
yes
@cschell
or values you have from a previous run
if you only provide x0 the objective will be evaluated at those points so it is a way of saying "I know some good values you should try at the start"
Christian Schell
@cschell
okay, so combined with callbacks I can basically save the current state after each iteration and if, for some reason, the process gets interrupted I can continue the search that way at a later time – right?
Tim Head
@betatim
yes
if you want to write a generic-ish Checkpoint callback that would be cool to contribute
Christian Schell
@cschell
@betatim sure, like this? scikit-optimize/scikit-optimize#665
Tim Head
@betatim
@cschell taking a look now. Sorry for the glacial reaction times. Apparently "everything" is happening at the moment :)
Kejia (KJ) Shi
@kejiashi
@iaroslav-ai For your benchmarks, do you think it would be helpful to include algorithms from the RoBO package? They have DNGO, another Bayesian net algorithm, and the multi-task one. The downside is that for the deep ones the running time will increase, and the performance is all but certain to be worse than other algorithms on synthetic functions.
qja
@qja
Hi,
What would be the best way to implement a quantised search space? For example, from 0.0 to 2.0 in 0.05 increments. I obviously can just set up an integer search space and then do all the logic to go from integer to quantised real in my blackbox function
but it might be better for plots / exploration to do it as a "special" dimension
Tim Head
@betatim
i'd use a categorical space with the values you want as "categories"
qja
@qja
Hi Tim, thanks that could definitely work but in my mind categorical implies no ordering between categories whereas a Quantized space has an ordering and i'm guessing it could help the underlying models to have an order
should i just do a QReal that inherits from Dimension, set _rvs to an integer rand, and then implement transform and inverse_transform as a transformer?
Tim Head
@betatim
That is a good point
Trying to think what the best way is to use np.linspace, arange and friends to generate these quantized dimensions
so maybe a new dimension type that is like those would be good
so not quantized but "a range with a fixed number of points in it"
which is basically the same but would cover "ten points between 0 and 10" as well as "ten points between 0 and 3.141" etc
does that make sense?
qja
@qja
That does make sense , I'll just go that way and make a new dimension subtype .
Tim Head
@betatim
cool
and I will go outdoors and take advantage of spring sunshine :sun_with_face:
qja
@qja
just to make sure i understand what transform and inverse_transform do
if i say want to make a Dimension of 10 points between 0 and 3.14
inverse_transform should return reals between 0 and 3.14
and transform should return Integers between 0 and 9 for example
Tim Head
@betatim
transform and inverse_transform go between what the user thinks the representation of the dimension is and the representation that optimizers like
a bit like for a neural network you want to scale all your input features so they have a similar range
we normalise all ranges to between 0 and 1 before the Gaussian process or random forest sees them
as an example
(and then you can play some clever tricks while doing these transformations but really all they are about is going from human friendly to machine friendly and back)
if you had ten points between 0 and 3.141 i'd apply the same transform as the regular Real does
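A hedged sketch of that idea (`QReal` is a hypothetical class, not skopt API): the grid holds the human-friendly values, `transform` normalises to [0, 1] like the regular Real, and `inverse_transform` snaps back to the nearest grid point.

```python
import numpy as np

class QReal:
    """Hypothetical quantized dimension: `n` evenly spaced points
    between `low` and `high` (illustration only, not skopt API)."""
    def __init__(self, low, high, n):
        self.low, self.high = low, high
        self.grid = np.linspace(low, high, n)

    def transform(self, x):
        # human-friendly value -> normalized [0, 1] for the model
        return (x - self.low) / (self.high - self.low)

    def inverse_transform(self, t):
        # normalized value -> nearest point on the quantized grid
        x = self.low + t * (self.high - self.low)
        return float(self.grid[np.argmin(np.abs(self.grid - x))])

dim = QReal(0.0, 3.141, 10)
```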
qja
@qja
ok
thanks for your help
Tim Head
@betatim
@qja for my curiosity: what is your use-case where you prefer having quantised steps like this over uniformly sampling along that dimension?
qja
@qja
my blackbox function, i believe, shouldn't be too sensitive to minute changes and has quite a few dimensions; i don't want to spend evaluation time checking if 0.5567891 is better than 0.55 in one dimension and would rather spend evaluations spanning more of the search space
Tim Head
@betatim
k
Supun Abeysinghe
@smb564
Hi, I am trying to use Bayesian Optimization for a parameter tuning scenario. However, my case is slightly different since the parameter bounds depend on other params. As an example, let's say I have two parameters x1 and x2, and I'm trying to optimize y. Let's say x1 is bounded in the (10, 200) range, and x2 is always bounded between x1 and 200. Is there any way that I can specify the bound of x2 like this in skopt?
mkhan037
@mkhan037
what is the scale of the 'xi' and 'kappa' parameters? For example, 'xi' is supposed to express how much improvement one wants over the previous best value. Is this improvement expressed as an absolute amount, scaled, or as a percentage?