    braham-snyder
    @braham-snyder
    I should clarify my objective is CPU-bound and GIL-locked -- n_jobs=-1 w/o distributed storage uses only 3/16 cores
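    (A minimal sketch of the usual workaround for a GIL-bound objective: run several worker processes sharing one storage instead of relying on n_jobs threads. The study name, storage URL, and toy objective below are placeholders, and the study is assumed to have been created beforehand with optuna.create_study(study_name="my_study", storage="sqlite:///example.db").)

    # run_worker.py -- launch this script once per core you want to use.
    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)  # placeholder for the real CPU-bound objective
        return (x - 2) ** 2

    study = optuna.load_study(study_name="my_study", storage="sqlite:///example.db")
    study.optimize(objective, n_trials=100)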
    Crystal Humphries
    @CrystalHumphries
    Is there a way to tweak Optuna so that one can vary the number of parameter sets evaluated at each acquisition step? That is, instead of testing one new set of parameters at a time, I would prefer to test >=3 at once.
    3 replies
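    (One possible sketch using the ask-and-tell interface available in recent Optuna versions; the batch size of 3 and the toy objective are placeholders, and whether this matches the intended acquisition behaviour depends on the sampler.)

    import optuna

    def evaluate(x):
        return (x - 2) ** 2  # placeholder objective

    study = optuna.create_study()
    for _ in range(10):
        # Ask for three trials before reporting any result back.
        trials = [study.ask() for _ in range(3)]
        values = [evaluate(t.suggest_float("x", -10, 10)) for t in trials]
        for t, v in zip(trials, values):
            study.tell(t, v)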
    Hamza Ali
    @ryzbaka
    Hi everyone, my name's Hamza (https://github.com/ryzbaka). I'm experienced in full-stack web development, data engineering, and computational statistics. I'm interested in working on the Optuna web dashboard for GSOC 2021.
    3 replies
    I'd like to know more about the process of getting started with the Optuna codebase. Based on the document here, should I get started with figuring out how to port the dashboard to TS or is there something else that I'm missing?
    Luca Ponzoni
    @luponzo86
    Hi, this may be a stupid question. I'm trying to use multi-objective optimization in Optuna 2.5.0 and I'm not sure how to turn off pruning to avoid the error: NotImplementedError("Trial.report is not supported for multi-objective optimization.")
    7 replies
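    (A minimal sketch of the usual workaround: the error comes from calling trial.report, either directly or via an integration pruning callback, so drop those calls and any pruner. This assumes a version where create_study(directions=...) is available; the two objectives are placeholders.)

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -5, 5)
        # No trial.report(...) and no pruning callback here.
        return x ** 2, (x - 2) ** 2  # two placeholder objectives

    study = optuna.create_study(directions=["minimize", "minimize"])
    study.optimize(objective, n_trials=50)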
    Dmitriy Selivanov
    @dselivanov
    Hi folks. I've tried to google this, but could not find a solution. I have a proxy loss function loss = weight_1 * loss_component_1 + weight_2 * loss_component_2 + ..., with the constraints sum(weight_i) = 1 and 0 < weight_i < 1 for all i. I want to find the optimal combination of weight_i, so essentially I need to sample the weights from a simplex (a Dirichlet-like distribution). Of course I can sample parameters from a uniform distribution and then normalize them, but I don't feel this is the right way.
    7 replies
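    (For what it's worth, a common sketch for weights that sum to 1: suggest one positive raw value per component, map it through -log so it is Exp(1)-distributed, and normalize; normalized Exp(1) samples follow a flat Dirichlet, i.e. they are uniform on the simplex. The loss values and parameter names below are placeholders.)

    import math
    import optuna

    LOSS_COMPONENTS = [0.3, 0.7, 1.2]  # placeholder losses; the real ones come from the model

    def objective(trial):
        raw = [trial.suggest_float(f"raw_w{i}", 1e-6, 1.0)
               for i in range(len(LOSS_COMPONENTS))]
        exp_samples = [-math.log(r) for r in raw]     # Exp(1) samples
        total = sum(exp_samples)
        weights = [e / total for e in exp_samples]    # point on the simplex
        for i, w in enumerate(weights):
            trial.set_user_attr(f"w{i}", w)           # record the actual weights used
        return sum(w * c for w, c in zip(weights, LOSS_COMPONENTS))

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=20)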
    esarvestani
    @esarvestani

    Hi everyone. I am going to use Optuna for hyperparameter optimization of an iterative process in which the number of samples increases with each iteration. I start Optuna from scratch at iteration 0, but for the next iterations I reuse the accumulated trials from all previous iterations. With this warm-start scheme, after some iterations the search concentrates on a very small region of the parameter space. Now I need to give it the chance to look into other regions after a few iterations. One idea I have is to force it to forget trials from long ago: for example, when it starts iteration 5 I want it to ignore the trials from iterations 0 and 1, and so on. To do so I use the code below to manually change the state of those trials from 'COMPLETE' to 'FAIL'; that way, when the study is loaded, only the trials with state='COMPLETE' are taken into account.

    import sqlite3

    import pandas as pd

    def makefailSqliteTable(storage):
        """Mark every trial in the SQLite storage as 'FAIL'."""
        sqliteConnection = None
        try:
            sqliteConnection = sqlite3.connect(storage)
            cursor = sqliteConnection.cursor()
            sql_update_query = """UPDATE trials SET state = 'FAIL' """
            cursor.execute(sql_update_query)
            sqliteConnection.commit()
            cursor.close()
        except sqlite3.Error as error:
            print("Failed to update sqlite table", error)
        finally:
            if sqliteConnection:
                sqliteConnection.close()
                print("The SQLite connection is closed")

    def updateSqliteTable(storage, N):
        """Restore the state of the N most recent trials to 'COMPLETE'."""
        sqliteConnection = None
        try:
            sqliteConnection = sqlite3.connect(storage)
            cursor = sqliteConnection.cursor()
            df = pd.read_sql_query("SELECT * FROM trials", sqliteConnection)
            # Only the last N trials keep the 'COMPLETE' state.
            sql_update_query = """UPDATE trials SET state = 'COMPLETE' WHERE number > """ + str(len(df) - N)
            cursor.execute(sql_update_query)
            sqliteConnection.commit()
            cursor.close()
        except sqlite3.Error as error:
            print("Failed to update sqlite table", error)
        finally:
            if sqliteConnection:
                sqliteConnection.close()
                print("The SQLite connection is closed")

    I would like to know whether this procedure does what I want. I mean, does it really make the sampler forget the history from long ago?

    2 replies
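    (As an alternative sketch that stays inside the Optuna API, assuming a version where Study.add_trial is available: copy only the most recent completed trials into a fresh study instead of rewriting states in SQL. The function name is made up, and the new study's direction should match the original one.)

    import optuna

    def keep_last_n_trials(old_storage, study_name, new_storage, n):
        # Load the accumulated study and keep only its most recent COMPLETE trials.
        old_study = optuna.load_study(study_name=study_name, storage=old_storage)
        completed = [t for t in old_study.trials
                     if t.state == optuna.trial.TrialState.COMPLETE]
        new_study = optuna.create_study(study_name=study_name, storage=new_storage)
        for t in completed[-n:]:
            new_study.add_trial(t)  # the sampler only "sees" this recent history
        return new_study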
    Hiroyuki Vincent Yamazaki
    @hvy
    Thanks for all your contributions, we’ve just released v2.6.0. Please check out the highlights and release note at https://github.com/optuna/optuna/releases/tag/v2.6.0 or via the Tweet https://twitter.com/OptunaAutoML/status/1368818695250669570. In short, it contains
    • Warm starting CMA-ES and sep-CMA-ES support
    • PyTorch Distributed Data Parallel support
    • RDB storage and heartbeat improvement
    • Pre-defined search space with ask-and-tell interface
    SM-91
    @SM-91
    Hi everyone, I want to know if we can retrieve a CSV file of a study from the study DB. Is that possible?
    2 replies
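    (A minimal sketch of one way to do this, assuming the study name and storage URL below are replaced with yours: load the study and dump its trials to CSV via the trials dataframe.)

    import optuna

    study = optuna.load_study(study_name="my_study", storage="sqlite:///example.db")
    df = study.trials_dataframe()   # one row per trial: params, values, states, ...
    df.to_csv("my_study.csv", index=False)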
    emaldonadocruz
    @emaldonadocruz
    Howdy! My name is Eduardo, and I am using Optuna for an optimization problem. I am using a grid search space, and I would like to return more than one metric from the objective. When I try to do so, as in the example below, I get the following error message: "Trial 2 failed, because the number of the values 3 does not match the number of the objectives 1."
    import numpy as np
    import optuna

    def objective(trial):
        # Instantiate model and evaluate
        metric_1 = ...  # first evaluation metric
        metric_2 = ...  # second evaluation metric

        # Get additional metrics
        metric_3 = ...  # third evaluation metric

        return [metric_1, metric_2, metric_3]

    d_space = np.linspace(0.05, 0.95, 2)
    l_space = np.linspace(0.0001, 0.0002, 2)
    search_space = {"D": d_space, "L": l_space}

    study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))

    study.optimize(objective,
                   n_trials=d_space.shape[0] * l_space.shape[0],
                   show_progress_bar=True)
    3 replies
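    (For context, a hedged sketch of what the error points at: the study above is single-objective by default, so returning three values fails. Declaring one optimization direction per returned metric should line up the counts; whether all three metrics should be maximized or minimized is an assumption here.)

    study = optuna.create_study(
        directions=["maximize", "maximize", "maximize"],  # one direction per returned metric
        sampler=optuna.samplers.GridSampler(search_space),
    )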
    P4tr1ck99
    @P4tr1ck99
    Hi everyone, I'm a rookie with Optuna and I would like to know how I can change the evaluation metric used for finding the best hyperparameters. If I understood it correctly, the metric on which Optuna decides whether a hyperparameter set is a good one is the accuracy. Instead of the accuracy I would prefer to use the F1 score or the recall.
    How could that be implemented with optuna?
    2 replies
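    (A minimal sketch; the classifier and data split are placeholders. Optuna simply optimizes whatever number the objective returns, so computing the F1 score, or recall_score for recall, and maximizing it is enough.)

    import optuna
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

    def objective(trial):
        n_estimators = trial.suggest_int("n_estimators", 10, 200)
        clf = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
        clf.fit(X_train, y_train)
        # Return the F1 score on the validation split instead of accuracy.
        return f1_score(y_valid, clf.predict(X_valid))

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=20)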
    Chris Fonnesbeck
    @fonnesbeck
    I'm running into an issue trying to optimize the hyperparameters for a TF Estimator model (specifically a DNNClassifier). When I set up and run an Optuna study it quickly uses up all of my session's resources and crashes (this is using either a high-memory GPU Colab session or an AWS Deep Learning AMI). I haven't had this problem using non-Estimator TF models, nor does it occur when I run my model outside of Optuna, so I'm wondering if there is something special that needs to be done with them.
    17 replies
    Dário Passos
    @dario-passos
    Hi! Is there a way of changing the color map that optuna.visualization.plot_contour uses by default? Thanks!
    3 replies
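    (I don't think there is a dedicated Optuna option for this, but since plot_contour returns a Plotly figure, one sketch is to restyle the contour traces afterwards; "Viridis" is just an example colorscale, and study is assumed to be an existing Study.)

    fig = optuna.visualization.plot_contour(study)
    # Change the colorscale of every contour trace in the returned Plotly figure.
    fig.update_traces(colorscale="Viridis", selector=dict(type="contour"))
    fig.show()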
    Robin-des-Bois
    @Robin-des-Bois

    Hi :-)
    Is there a predefined way to nest trial parameters?
    I would like to pass a trial object into a function and all the parameters that get added inside this function should be prefixed automatically by a string that I specify.

    I imagine the interface to look something like this but did not find something similar in the API:

    def configure_subsystem_a(trial: optuna.Trial) -> SubSystemA:
        n_params = trial.suggest_int("n_params", 1, 3)
        return SubSystemA(n_params)

    trial = ...

    subsystem_a = configure_subsystem_a(trial.withPrefix('subsystem_a'))

    This should result in a config like this:

    {
    'subsystem_a.n_params': 3
    }

    It would be quite easy to build this functionality myself, by wrapping the trial object, but if functionality like this is provided, I would prefer to use that.

    1 reply
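    (As far as I know there is no built-in prefixing helper, so here is a minimal sketch of the do-it-yourself wrapper mentioned above; the class name is made up, and you would extend it with the other suggest_* methods you need.)

    import optuna

    class PrefixedTrial:
        """Forwards suggest_* calls to a real trial, prefixing parameter names."""

        def __init__(self, trial: optuna.Trial, prefix: str):
            self._trial = trial
            self._prefix = prefix

        def suggest_int(self, name, low, high, **kwargs):
            return self._trial.suggest_int(f"{self._prefix}.{name}", low, high, **kwargs)

        def suggest_float(self, name, low, high, **kwargs):
            return self._trial.suggest_float(f"{self._prefix}.{name}", low, high, **kwargs)

    # usage: subsystem_a = configure_subsystem_a(PrefixedTrial(trial, "subsystem_a"))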
    MaximilianSamLickeAgdur
    @MaximilianSamLickeAgdur

    Hi,

    What is the preferred way of dealing with trials/suggestions whose ranges depend on each other? See the __call__ function below.
    Is the preferred way to do it as below, or is it better to set a central value and penalize values outside the range?
    Does this method even work with Optuna? The sampler being used is TPE. Are certain samplers better at this?
    Any tips on literature?

    import torch

    class Objectiveoptim(object):

        def __init__(self, idmodelsdict, value):
            self.idmodelsdict = idmodelsdict
            self.value = value

        def __call__(self, trial):
            totalfactor = 0
            totalvalueused = 0
            valuedict = dict()

            for id_, model in self.idmodelsdict.items():
                model.eval()

                # The upper bound shrinks by the amount already allocated to earlier parameters.
                valuedict[id_] = trial.suggest_float(id_, 0, self.value - totalvalueused)
                totalvalueused += valuedict[id_]
                totalfactor += model(torch.tensor([valuedict[id_]], dtype=torch.float32))

            return totalfactor
    1 reply
    Francesco Carli
    @mr-fcharles_gitlab

    Hi,

    I'm having difficulties understanding how I can use the command

    optuna.visualization.plot_param_importances(study)

    to visualize hyperparameter importance when performing multi-objective optimization. I understand that I should specify the metric with respect to which I want the importances to be computed, but I don't understand how to do so.

    Thanks in advance!

    7 replies
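    (If I remember the API correctly, the target argument selects which objective the importances are computed against; a minimal sketch for the first objective, with an arbitrary target_name label.)

    optuna.visualization.plot_param_importances(
        study,
        target=lambda t: t.values[0],      # importance w.r.t. the first objective
        target_name="first objective",
    )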
    Dário Passos
    @dario-passos
    Hey everyone. I've been using optuna-dashboard for a couple of weeks now and I'm seeing some weird behaviour. I'm using Optuna 2.6 to optimize the hyperparameters of a relatively small (5 to 8 layers) tensorflow/keras convolutional neural network in a Jupyter notebook, and optuna-dashboard 0.3.1 (SQLAlchemy 1.3.22) to monitor the evolution of the optimization. My default browser is Chrome (version 89.0.4389.82) and my OS is Windows 10. The strange behaviour I've started noticing is very high RAM consumption by optuna-dashboard after a certain number of trials, much larger than the database file being created by Optuna. For example, I have a study.db file of roughly 11 MB corresponding to 1563 trials. Displaying it in the browser gobbles up around 4 GB of RAM, and that is if I shut down optuna-dashboard and reload the study.db from scratch. When I continuously monitor the optimization experiment from the beginning, optuna-dashboard reaches around 6 GB of RAM (for exactly the same file). Around trial 500 (more or less), the browser starts to become unresponsive or very slow when using selection buttons, etc. This makes analyzing the results difficult and is quite annoying... I see the same behaviour on two different PCs with different graphics cards and memory configurations. My CPUs and GPUs always run below 60% utilization, so a lack of resources does not seem to be the cause. Is this supposed to happen? What can I do to make the process run faster?
    1 reply
    yywangvr
    @yywangvr
    import optuna
    
    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        v=v0+v1
        return v
    Hello!
    Is it possible to log the values of v0 and v1 so that they show up in functions such as study.trials_dataframe? I also wish to analyse these intermediate values from the objective function.
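    (One common sketch, not necessarily the only way: store the intermediate values as user attributes, which then appear as user_attrs_* columns in study.trials_dataframe().)

    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)

        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2

        # Attach the components to the trial so they appear in trials_dataframe().
        trial.set_user_attr("v0", v0)
        trial.set_user_attr("v1", v1)
        return v0 + v1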
    Miguel Crispim Romao
    @romanovzky
    Hi all. Quick question, when using optuna.multi_objective.samplers.NSGAIIMultiObjectiveSampler, which has by default 50 trials per generation, if I set the study to perform 1000 trials do I assume correctly that there will be 20 generations?
    3 replies
    yywangvr
    @yywangvr
    The solution to my earlier question about logging v0 and v1 is given by an Optuna author here: https://github.com/optuna/optuna/issues/2520#issuecomment-806578752

    Chris Fonnesbeck
    @fonnesbeck
    Optuna is filling up my hard drive with what I assume are some sort of swap/temp files when running a study. My nearly empty (before running Optuna) 1TB drive is now almost full after 30 trials. Where are these files, and why doesn't Optuna get rid of them?
    3 replies
    Dário Passos
    @dario-passos
    In the Optuna documentation, there is a note in the "Key Features" section called "Which Sampler and Pruner Should be Used?" that points to a potentially very relevant document, "Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020", which describes the performance of sampler/pruner pairs for deep learning tasks. Unfortunately this document is in Japanese! Is anyone aware of a translation of this document into English, or of some blog or webpage where its content is described in English? Since deep learning tasks are a very active topic, it would be very useful to have this benchmark (or a summary of it) in English so researchers from different places can tap into that information.
    1 reply
    James Y
    @yuanjames
    Hi, does anyone know whether the CMA-ES sampler uses n initial independent samplings to generate the initial parameters for CMA-ES?
    4 replies
    Hideaki Imamura
    @HideakiImamura
    Thanks to your contributions, we’ve just released v2.7.0. Check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.7.0 or with the Tweet https://twitter.com/OptunaAutoML/status/1378915791614058502?s=20. Highlights:
    • The Optuna dashboard now has its own repository. Install it with pip install optuna-dashboard and try it out with optuna-dashboard $STORAGE_URL. The dashboard subcommand is now deprecated!
    • Deprecation of n_jobs in Study.optimize, keeping process-level parallelization. There should be one obvious way to do distributed optimization.
    • Lots of new tutorials and examples!
    2403hwaseer
    @2403hwaseer
    Hey @c-bata , I wanted to ask where can I submit my GSoC proposal for review?
    2 replies
    Izabela Paulino
    @izabfee_gitlab
    Hi everyone! I would like to ask if there is any other documentation where I can find the explanation of each parameter of LightGBMTunerCV?
    2 replies
    I'm in doubt specifically about the difference between the folds and nfold parameters of LightGBMTunerCV.
    Miguel Crispim Romao
    @romanovzky
    Hi all. In a given optimisation problem, I'm not only interested in the best solutions, but in getting as many non-duplicate solutions as possible. So far I have been using NSGAIISampler with multiple objectives, but I would like to encourage the evolution (or the sampler, if I use TPE instead) to "explore" more. Any advice? Thanks
    3 replies
    Hideaki Imamura
    @HideakiImamura
    Hi everyone! PR reviews and issue responses will be slow due to some maintainers taking a vacation from 4/24-5/6. Sorry for the inconvenience.
    Michael Schlitzer
    @michaelschlitzer
    I have a very dumb question about Optuna. When I run a study and get study.best_trial.value, and my objective function returns the accuracy on the test/validation dataset, what is that best_trial.value? Is it the accuracy on the test/validation set, or is it the accuracy on the training set? Or is it some other number altogether? I just can't seem to find or decipher the answer. Thank you.
    2 replies
    dumbjarvis
    @dumbjarvis
    Does optuna work on graphs? Like PyTorch geometric?
    To be specific, tune params for community detection on graphs.
    1 reply
    dlin1
    @kdlin
    The following command, optuna.visualization.plot_optimization_history(study), works in a Jupyter notebook but not in JupyterLab. What should I do to make it work in JupyterLab? Thank you.
    3 replies
    MaximilianSamLickeAgdur
    @MaximilianSamLickeAgdur
    This message was deleted
    3 replies
    esarvestani
    @esarvestani
    Hi everyone. I want to use Optuna for a simple multi-objective optimization problem. It works well; the only problem is that retrieving the best trials takes too much time, even much more than the optimization itself. It's not clear to me why this simple Python assignment should take so much time. This is a piece of code to do that, in which I have a function of two variables (x, y) and I want to find the value of y that minimizes the function for each given x.
    import numpy as np
    import optuna

    def multi_objective(trial, X):
        y = trial.suggest_float("y", -5., 5.)
        result = []
        for x in X:
            result.append((x**2+y-11)**2 + (x+y**2-7)**2)
    
        return result
    
    optuna.logging.set_verbosity(optuna.logging.WARNING)
    Y = np.linspace(start=-5., stop=5., num=500)
    study = optuna.create_study(directions=['minimize']*len(Y))
    study.optimize(lambda trial: multi_objective(trial, Y), n_trials=1000)
    
    # the next line takes too much time to run
    bests = study.best_trials
    
    best_values = [trial.values for trial in bests]
    best_params = [trial.params for trial in bests]
    2 replies
    Pedro Vítor
    @pvcastro_twitter
    Hi there! I have some hyperparameters that are dependent on others. For example, I'm testing two different learning rate schedulers in AllenNLP: slanted_triangular and linear_with_warmup, and they each have their own hyperparameters. How can I tie these related hyperparameters to values that are related to each other? Also, is there any way to improve the sampler configuration to take these dependencies into consideration, to prevent combining and evaluating unnecessary scenarios?
    6 replies
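    (A minimal sketch of the usual define-by-run pattern for this; the scheduler names come from the question, while the hyperparameter names and ranges are placeholders. Suggesting the scheduler type first and only then suggesting the hyperparameters that belong to it means unrelated combinations are never sampled. As far as I know, TPESampler with multivariate=True and group=True can model such tree-structured spaces jointly.)

    def objective(trial):
        scheduler = trial.suggest_categorical(
            "scheduler", ["slanted_triangular", "linear_with_warmup"]
        )
        if scheduler == "slanted_triangular":
            cut_frac = trial.suggest_float("st_cut_frac", 0.0, 0.5)       # placeholder range
            scheduler_cfg = {"type": scheduler, "cut_frac": cut_frac}
        else:
            warmup_steps = trial.suggest_int("lw_warmup_steps", 0, 1000)  # placeholder range
            scheduler_cfg = {"type": scheduler, "warmup_steps": warmup_steps}
        # ... build and train the AllenNLP model with scheduler_cfg ...
        return 0.0  # placeholder metric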
    MaximilianSamLickeAgdur
    @MaximilianSamLickeAgdur
    Like the question above, I have hyperparameters that depend on each other. In particular, the last hyperparameter in the loop depends on all the others; will the code example below be a problem? The chosen sampler is TPE (multivariate).
    class Randomoptim(object):

        def __init__(self, modelsdict):
            self.modelsdict = modelsdict
            self.value = 1.0

        def __call__(self, trial):
            totalfactor = 0
            valuedict = dict()

            for id_, model in self.modelsdict.items():
                model.eval()
                if id_ == list(self.modelsdict.keys())[-1]:
                    # The last parameter takes exactly the remaining budget.
                    valuedict[id_] = trial.suggest_float(id_, self.value, self.value)
                else:
                    valuedict[id_] = trial.suggest_float(id_, 0, self.value)
                self.value -= valuedict[id_]

                totalfactor += model(valuedict[id_])
            return totalfactor
    3 replies
    Dmitry
    @Akkarine
    Hello! Is there any way to prune a study of failed/stale trials stuck in the RUNNING state?
    Or is it only possible manually in the database?
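    (For what it's worth, a sketch of the heartbeat mechanism in recent Optuna versions, which marks trials of crashed workers as failed automatically instead of requiring manual database edits; the URL, study name, and interval values are arbitrary.)

    import optuna

    storage = optuna.storages.RDBStorage(
        url="sqlite:///example.db",
        heartbeat_interval=60,   # each running trial records a heartbeat every 60 s
        grace_period=120,        # trials silent for 120 s are considered failed
    )
    study = optuna.create_study(study_name="my_study", storage=storage, load_if_exists=True)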
    Dmitry
    @Akkarine
    image.png
    2 replies
    Well, I discovered that you shouldn't do that, because it corrupts the sampler's history.
    Pedro Vítor
    @pvcastro_twitter
    I'm running a study with AllenNLP and Optuna, testing 9 hyperparameters: 8 categoricals and a float from 1 to 5 (with step 1). This would yield around 576k different combinations. I'm around trial 26 now, and for some reason trials 21 to 25 all used exactly the same hyperparameters, which were "copied" from trial 11, the best one so far. Why is this happening? Why isn't Optuna trying other combinations? I'm using the default TPESampler (also with default parameters) and SuccessiveHalvingPruner with min_resource=5 (other parameters are also default). I'm also running two processes, one per GPU, both sharing the same Optuna database.
    5 replies
    Pedro Vítor
    @pvcastro_twitter
    If I'm using a particular metric for a study, how do I set up the study so that trials get pruned unless they reach a minimum score of x after y epochs?
    6 replies
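    (One possible sketch using ThresholdPruner, assuming a score that should be at least 0.5 after 5 epochs; adjust lower and n_warmup_steps to your x and y. The training loop is a stand-in: any trial whose reported score is below the threshold after the warm-up steps gets pruned.)

    import optuna

    study = optuna.create_study(
        direction="maximize",
        pruner=optuna.pruners.ThresholdPruner(lower=0.5, n_warmup_steps=5),
    )

    def objective(trial):
        lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
        score = 0.0
        for epoch in range(20):
            score = min(1.0, score + lr)   # stand-in for one real training epoch
            trial.report(score, step=epoch)
            if trial.should_prune():
                raise optuna.TrialPruned()
        return score

    study.optimize(objective, n_trials=30)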
    TKouras
    @kouras_t_twitter
    Hello, I'm trying to do some regressions with Optuna, and after the 50 trials I want to keep the best r2_score and a scatter plot of actual vs. predicted values. If I do the plotting inside the objective function it will produce 50 scatter plots, but I only want the one for the best trial. Is there any way to do this?
    4 replies
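    (A minimal sketch of the usual pattern, with a toy Ridge regression and synthetic data as placeholders: keep only the scoring inside the objective, then refit once with study.best_params after optimization and draw the single scatter plot for that model.)

    import matplotlib.pyplot as plt
    import optuna
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=300, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def objective(trial):
        alpha = trial.suggest_float("alpha", 1e-3, 10.0, log=True)
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        return r2_score(y_test, model.predict(X_test))

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)

    # Refit once with the best parameters and draw a single scatter plot.
    best_model = Ridge(**study.best_params).fit(X_train, y_train)
    y_pred = best_model.predict(X_test)
    print("best r2:", study.best_value)
    plt.scatter(y_test, y_pred)
    plt.xlabel("actual")
    plt.ylabel("predicted")
    plt.show()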
    Alexander_Konstantinidis
    @AlexanderKonstantinidis

    Hi, I am running the following code and I get an error message, could you please help?
    import optuna.integration.lightgbm as lgbm

    best_params, tuning_history = dict(), list()
    booster = lgbm.train(params, dtrain, valid_sets=dval,
                         verbose_eval=0,
                         best_params=best_params,
                         tuning_history=tuning_history)

    The error message is:
    TypeError                                 Traceback (most recent call last)

    <ipython-input-28-c0d324367a3c> in <module>
          4                      verbose_eval=0,
          5                      best_params=best_params,
    ----> 6                      tuning_history=tuning_history)
          7

    ~\Anaconda3\envs\tf2\lib\site-packages\optuna\integration\_lightgbm_tuner\__init__.py in train(*args, **kwargs)
         32     _imports.check()
         33
    ---> 34     auto_booster = LightGBMTuner(*args, **kwargs)
         35     auto_booster.run()
         36     return auto_booster.get_best_booster()

    TypeError: __init__() got an unexpected keyword argument 'best_params'
    Thank you.

    1 reply
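    (A hedged sketch of what the TypeError itself suggests: in this Optuna version the best_params and tuning_history keyword arguments are no longer accepted by optuna.integration.lightgbm.train, so one option is to drop them and read the tuned parameters back from the returned booster. This assumes params, dtrain, and dval are defined as in the question above.)

    import optuna.integration.lightgbm as lgbm

    # params, dtrain and dval as defined in the question above.
    booster = lgbm.train(params, dtrain, valid_sets=dval, verbose_eval=0)
    print("tuned parameters:", booster.params)  # the returned booster carries the final params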
    Hiroyuki Vincent Yamazaki
    @hvy

    As always, thanks for all the feedback and contributions. We’ve just released v2.8.0 with several interesting features and improvements.

    🙊Constant Liar (CL) for TPE improves distributed search
    🌳Tree-structured search space support for multivariate TPE
    🪞Copying Studies across storages
    📞Callbacks to re-run a pre-empted trial

    Check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.8.0 or with the Tweet https://twitter.com/OptunaAutoML/status/1401799603154939908.

    Patshin_Anton
    @paantya

    Hi All!

    Can you please tell me whether it is possible to use optuna.pruners together with n_jobs=10, and how best to do it to get a speedup?

    5 replies
    Miguel Crispim Romao
    @romanovzky
    Hi all. I have an objective function with a lot of invalid regions, which I signal to the study by returning a NaN. I notice that TPE spends a lot of time/trials around invalid regions after a while. Is this by construction, due to the "uncertainty driven" GP logic? Is there a way to prevent TPE from spending so long trying points with high uncertainty due to previous NaNs in the vicinity? Cheers
    5 replies
    adriannaziel
    @adriannaziel
    Hello,
    I have a question about hyperparameter importance. When I use optuna.importance.get_param_importances(study) and optuna.visualization.plot_param_importances(study), both on the same single-objective study and with default parameters, the results differ. Why is that? Or maybe I am doing something wrong? I tried to look for an explanation in the Optuna docs but haven't found anything.
    1 reply
    Krishna Bhogaonker
    @00krishna
    Hello there optuners, this is my first time using Optuna and it works well except for one small thing. I am not exactly sure how to explain this. I use the Weights and Biases website (wandb.com) to track all of the model runs (the logs), and I am using pytorch-lightning to run my model. I am successfully able to create the objective() function and run the training, but it seems that the log on Weights and Biases is getting overwritten. So instead of different runs or experiments for each trial, I just see one really funky run with training errors and losses everywhere, because everything is mixed together.
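    (A common sketch for this, with the project name, model class, and metric key as placeholders: start a fresh W&B run per trial, e.g. a new WandbLogger named after trial.number, and finish the run at the end of the objective so trials don't all write into the same run.)

    import wandb
    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import WandbLogger

    def objective(trial):
        lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
        # One W&B run per Optuna trial, named after the trial number.
        logger = WandbLogger(project="my-project", name=f"trial-{trial.number}")

        model = MyLightningModule(lr=lr)          # placeholder LightningModule
        trainer = Trainer(logger=logger, max_epochs=5)
        trainer.fit(model)

        val_loss = trainer.callback_metrics["val_loss"].item()
        wandb.finish()                            # close this trial's run before the next one
        return val_loss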