    P4tr1ck99
    @P4tr1ck99
    Hi everyone, I'm a rookie at Optuna and I would like to know how I can change the evaluation metric used for finding the best hyperparameters. If I understood correctly, the metric on which Optuna decides whether a hyperparameter set is a good one is the accuracy. Instead of the accuracy I would prefer to use the F1 score or the recall.
    How could that be implemented with optuna?
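    A minimal sketch of how this is usually done (assuming a scikit-learn classifier; build_model and the train/validation arrays are hypothetical placeholders): the objective simply returns whatever metric you want Optuna to optimize, e.g. the F1 score, and the study direction is set to "maximize".

    import optuna
    from sklearn.metrics import f1_score

    def objective(trial):
        model = build_model(trial)       # hypothetical helper that suggests hyperparameters
        model.fit(X_train, y_train)      # hypothetical training data
        preds = model.predict(X_valid)   # hypothetical validation data
        return f1_score(y_valid, preds)  # Optuna optimizes whatever the objective returns

    study = optuna.create_study(direction="maximize")  # maximize F1 instead of accuracy
    study.optimize(objective, n_trials=100)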
    2 replies
    Chris Fonnesbeck
    @fonnesbeck
    I'm running into an issue trying to optimize the hyperparameters for a TF Estimator model (specifically a DNNClassifier). When I set up and run an Optuna study it quickly uses up all of my session's resources and crashes (this is using either a high-memory GPU Colab session or an AWS Deep Learning AMI). I haven't had this problem using non-Estimator TF models, nor does it occur when I run my model outside of Optuna, so I'm wondering if there is something special that needs to be done with them.
    17 replies
    Dário Passos
    @dario-passos
    Hi! Is there a way of changing the color map that optuna.visualization.plot_contour uses by default? Thanks!
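    There does not seem to be a dedicated colormap argument, but plot_contour returns a Plotly figure, so one workaround (a sketch, assuming the default Plotly contour traces) is to restyle it afterwards:

    import optuna

    fig = optuna.visualization.plot_contour(study)  # `study` is an existing study
    # Change the colorscale of the contour traces in the returned Plotly figure.
    fig.update_traces(colorscale="Viridis", selector=dict(type="contour"))
    fig.show()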
    3 replies
    Robin-des-Bois
    @Robin-des-Bois

    Hi :-)
    Is there a predefined way to nest trial parameters?
    I would like to pass a trial object into a function and all the parameters that get added inside this function should be prefixed automatically by a string that I specify.

    I imagine the interface would look something like this, but I did not find anything similar in the API:

    def configure_subsytem_a(trial: optuna.Trial) -> SubSystemA:
        n_params = trial.suggest_int("n_params", 1, 3)
        return SubSystemA(n_params)
    
    trial = ...
    
    subsystem_a = configure_subsytem_a(trial.withPrefix('subsystem_a'))

    This should result in a config like this:

    {
        'subsystem_a.n_params': 3
    }

    It would be quite easy to build this functionality myself, by wrapping the trial object, but if functionality like this is provided, I would prefer to use that.
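    For reference, a minimal wrapper sketch of that idea (withPrefix / PrefixedTrial are not an existing Optuna API; only suggest_int is wrapped here for brevity):

    import optuna

    class PrefixedTrial:
        """Thin wrapper that prefixes every parameter name (sketch, not an Optuna API)."""

        def __init__(self, trial: optuna.Trial, prefix: str):
            self._trial = trial
            self._prefix = prefix

        def suggest_int(self, name, low, high, **kwargs):
            return self._trial.suggest_int(f"{self._prefix}.{name}", low, high, **kwargs)

        # suggest_float, suggest_categorical, ... can be wrapped the same way.

    def objective(trial):
        subsystem_a = configure_subsytem_a(PrefixedTrial(trial, "subsystem_a"))
        ...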

    1 reply
    MaximilianSamLickeAgdur
    @MaximilianSamLickeAgdur

    Hi,

    What is the preferred way of dealing with trials/suggestions whose ranges depend on each other? See the __call__ function below.
    Is the preferred way to do it as below, or is it better to set a central value and penalize values outside the range?
    Does this method even work with Optuna (the sampler being used is TPE)? Are certain samplers better at this?
    Any tips on literature?

    import torch

    class Objectiveoptim(object):

        def __init__(self, idmodelsdict, value):
            self.idmodelsdict = idmodelsdict
            self.value = value

        def __call__(self, trial):
            totalfactor = 0
            totalvalueused = 0
            valuedict = dict()

            for id_, model in self.idmodelsdict.items():
                model.eval()

                valuedict[id_] = trial.suggest_float(id_, 0, self.value - totalvalueused)
                totalvalueused += valuedict[id_]
                totalfactor += model(torch.tensor([valuedict[id_]], dtype=torch.float32))

            return totalfactor
    1 reply
    Francesco Carli
    @mr-fcharles_gitlab

    Hi,

    I'm having difficulties in understanding how i can use the command

    optuna.visualization.plot_param_importances(study)

    to visualize hyperparameter importance while performing multi-objective optimization. I understand that I should specify the metric with respect to which I want the importances to be computed, but I don't understand how to do so.

    Thanks in advance!
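    If I am not mistaken, plot_param_importances accepts a target function that selects which objective value the importances are computed for; a sketch for a multi-objective study:

    import optuna

    # `study` is an existing multi-objective study; compute importances with
    # respect to the first objective (t.values[1] would select the second one).
    fig = optuna.visualization.plot_param_importances(
        study, target=lambda t: t.values[0], target_name="first objective"
    )
    fig.show()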

    7 replies
    Dário Passos
    @dario-passos
    Hey everyone. I've been using optuna-dashboard for a couple of weeks now and I'm seeing some weird behaviour. I'm using Optuna 2.6 to optimize the hyperparameters of a relatively small (5 to 8 layers) tensorflow/keras convolutional neural network in a Jupyter notebook, and optuna-dashboard 0.3.1 (SQLAlchemy 1.3.22) to monitor the evolution of the optimization. My default browser is Chrome (version 89.0.4389.82) and my OS is Windows 10. The strange behaviour I've started noticing is very high RAM consumption by optuna-dashboard after a certain number of trials in my optimization studies, much larger than the database file being created by Optuna. For example, I have a study.db file of roughly 11 MB corresponding to 1563 trials. Displaying this in the browser gobbles up around 4 GB of RAM, and that is if I shut down optuna-dashboard and reload the study.db from scratch. When I continuously monitor the optimization experiment from the beginning, optuna-dashboard reaches around 6 GB of RAM (for exactly the same file). Around trial 500 (more or less), the browser starts to get unresponsive or very slow when using selection buttons, etc. This makes analysing the results difficult and is quite annoying... I see this same behaviour on two different PCs with different graphics cards and memory configurations. My CPUs and GPUs are always running below 60% utilisation, so lack of resources does not seem to be the cause. Is this supposed to happen? What can I do to make the process run faster?
    1 reply
    yywangvr
    @yywangvr
    import optuna
    
    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        v = v0 + v1
        return v
    Hello!
    Is it possible to log the values of v0 and v1 in some function such as study.trials_dataframe? I also wish to analyse the intermediate values from the objective function.
    Miguel Crispim Romao
    @romanovzky
    Hi all. Quick question, when using optuna.multi_objective.samplers.NSGAIIMultiObjectiveSampler, which has by default 50 trials per generation, if I set the study to perform 1000 trials do I assume correctly that there will be 20 generations?
    3 replies
    yywangvr
    @yywangvr
    import optuna
    
    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        v = v0 + v1
        return v

    Hello!
    Is it possible to log the values of v0 and v1 in some function such as study.trials_dataframe? I also wish to analyse the intermediate values from the objective function.

    The solution is given by an Optuna author here: https://github.com/optuna/optuna/issues/2520#issuecomment-806578752
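    The usual pattern (a sketch that may or may not match the linked answer exactly) is to store the intermediate values as user attributes, which then appear as user_attrs_* columns in study.trials_dataframe():

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        # Record the components so they can be analysed later.
        trial.set_user_attr("v0", v0)
        trial.set_user_attr("v1", v1)
        return v0 + v1

    study = optuna.create_study()
    study.optimize(objective, n_trials=10)
    df = study.trials_dataframe()  # has user_attrs_v0 and user_attrs_v1 columns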

    Chris Fonnesbeck
    @fonnesbeck
    Optuna is filling up my hard drive with what I assume are some sort of swap/temp files when running a study. My nearly empty (before running Optuna) 1TB drive is now almost full after 30 trials. Where are these files, and why doesn't Optuna get rid of them?
    3 replies
    Dário Passos
    @dario-passos
    In the Optuna documentation, there is a note in the "Key Features" section called "Which Sampler and Pruner Should be Used?" that points to a potentially very relevant document, "Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020", which describes the performance of sampler/pruner pairs for deep learning tasks. Unfortunately this document is in Japanese! Is anyone aware of a translation of this document into English, or of some blog or webpage where its content is described in English? Since deep learning tasks are a very contemporary subject, it would be very useful to have this benchmark (or a summary of it) in English so researchers from different places can tap into that information.
    1 reply
    James Y
    @yuanjames
    Hi, does anyone know whether the CMA-ES sampler uses n initial independent samplings to generate the initial parameters for CMA-ES?
    4 replies
    Hideaki Imamura
    @HideakiImamura
    Thanks to your contributions, we’ve just released v2.7.0. Check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.7.0 or with the Tweet https://twitter.com/OptunaAutoML/status/1378915791614058502?s=20. Highlights:
    Optuna dashboard now has its own repository. Install with pip install optuna-dashboard and try it out with optuna-dashboard $STORAGE_URL. The dashboard subcommand is now deprecated!
    n_jobs in Study.optimize is deprecated in favour of process-level parallelization. There should be one obvious way to do distributed optimization.
    Lots of new tutorials and examples!
    2403hwaseer
    @2403hwaseer
    Hey @c-bata, I wanted to ask where I can submit my GSoC proposal for review?
    2 replies
    Izabela Paulino
    @izabfee_gitlab
    Hi everyone! I would like to ask if there is any other documentation where I can find the explanation of each parameter of LightGBMTunerCV?
    2 replies
    I'm in doubt specifically about the difference between the folds and nfold parameters of LightGBMTunerCV.
    Miguel Crispim Romao
    @romanovzky
    Hi all. In a given optimisation problem, I'm not only interested in the best solutions, but in getting as many non-duplicate solutions as possible for a certain problem. So far I have been using NSGAIISampler with multiple objectives, but I would like to encourage the evolution (or the sampler, if I use TPE instead) to "explore" more. Any advice? Thanks
    3 replies
    Hideaki Imamura
    @HideakiImamura
    Hi everyone! PR reviews and issue responses will be slow due to some maintainers taking a vacation from 4/24-5/6. Sorry for the inconvenience.
    Michael Schlitzer
    @michaelschlitzer
    I have a very dumb question about Optuna. When I run a study and get study.best_trial.value, and my objective function returns the accuracy on the test/validation dataset, what is that best_trial.value? Is it the accuracy on the test/validation set, or is it the accuracy on the training set? Or is it some other number altogether? I just can't seem to find or decipher the answer. Thank you.
    2 replies
    dumbjarvis
    @dumbjarvis
    Does optuna work on graphs? Like PyTorch geometric?
    To be specific, tune params for community detection on graphs.
    1 reply
    dlin1
    @kdlin
    The following command, optuna.visualization.plot_optimization_history(study), works in a Jupyter notebook but not in JupyterLab. What should I do to make it work in JupyterLab? Thank you.
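    Optuna's plot_* functions return Plotly figures, and classic notebooks and JupyterLab render them differently; assuming the blank output is a Plotly renderer issue (and not Optuna itself), one workaround sketch is to force a renderer that works without the JupyterLab Plotly extension:

    import plotly.io as pio
    import optuna

    pio.renderers.default = "iframe"  # render figures in an iframe instead of the default

    fig = optuna.visualization.plot_optimization_history(study)  # `study` is an existing study
    fig.show()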
    3 replies
    MaximilianSamLickeAgdur
    @MaximilianSamLickeAgdur
    This message was deleted
    3 replies
    esarvestani
    @esarvestani
    Hi everyone. I want to use Optuna for a simple multi-objective optimization problem. It works well; the only problem is that retrieving the best trials takes too much time, even much more than the optimization itself. It's not clear to me why this simple Python assignment should take so much time. This is a piece of code to do that, in which I have a function in two dimensions (x, y) and I want to find the value of y that minimizes the function for a given x.
    import numpy as np
    import optuna

    def multi_objective(trial, X):
        y = trial.suggest_float("y", -5., 5.)
        result = []
        for x in X:
            result.append((x**2+y-11)**2 + (x+y**2-7)**2)
    
        return result
    
    optuna.logging.set_verbosity(optuna.logging.WARNING)
    Y = np.linspace(start=-5., stop=5., num=500)
    study = optuna.create_study(directions=['minimize']*len(Y))
    study.optimize(lambda trial: multi_objective(trial, Y), n_trials=1000)
    
    # the next line takes too much time to run
    bests = study.best_trials
    
    best_values = [trial.values for trial in bests]
    best_params = [trial.params for trial in bests]
    2 replies
    Pedro Vítor
    @pvcastro_twitter
    Hi there! I have some hyperparameters that are dependent on others. For example, I'm testing two different learning rate schedulers in AllenNLP: slanted_triangular and linear_with_warmup, and they each have their own hyperparameters. How can I tie these related hyperparameters to values that are related to each other? Also, is there any way to improve the sampler configuration to take these dependencies into consideration, to prevent combining and evaluating unnecessary scenarios?
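    A common pattern for this kind of dependency (a sketch; the scheduler-specific parameter names below are hypothetical, not the actual AllenNLP ones) is to branch on a categorical suggestion so that only the hyperparameters relevant to the chosen scheduler are sampled:

    def objective(trial):
        scheduler = trial.suggest_categorical(
            "scheduler", ["slanted_triangular", "linear_with_warmup"]
        )
        if scheduler == "slanted_triangular":
            cut_frac = trial.suggest_float("st_cut_frac", 0.05, 0.5)        # hypothetical name/range
        else:
            warmup_steps = trial.suggest_int("lw_warmup_steps", 100, 2000)  # hypothetical name/range
        ...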
    6 replies
    MaximilianSamLickeAgdur
    @MaximilianSamLickeAgdur
    Like the above, I have hyperparameters that are dependent on each other. In particular, the last hyperparameter in the loop depends on all the other hyperparameters; will the code example below be a problem? Chosen sampler: TPE (multivariate).
    class Randomoptim(object):

        def __init__(self, modelsdict):
            self.modelsdict = modelsdict
            self.value = 1.0

        def __call__(self, trial):
            totalfactor = 0
            valuedict = dict()

            for id_, model in self.modelsdict.items():
                model.eval()
                if id_ == list(self.modelsdict.keys())[-1]:
                    valuedict[id_] = trial.suggest_float(id_,
                                                         self.value,
                                                         self.value)
                else:
                    valuedict[id_] = trial.suggest_float(id_,
                                                         0,
                                                         self.value)
                self.value -= valuedict[id_]

                totalfactor += model(valuedict[id_])
            return totalfactor
    3 replies
    Dmitry
    @Akkarine
    Hello! Is there any way to prune a study of failed trials or trials stuck in the RUNNING state?
    Or can it only be done manually in the database?
    Dmitry
    @Akkarine
    [image attachment: image.png]
    2 replies
    Well, I discovered that you shouldn't do it, because it corrupts the sampler history :)
    Pedro Vítor
    @pvcastro_twitter
    I'm running a study with AllenNLP and Optuna, testing 9 hyperparameters: 8 categoricals and a float from 1 to 5 (with step 1). This would yield around 576k different combinations. I'm around trial 26 now, and for some reason trials 21 to 25 all used exactly the same hyperparameters, which were "copied" from trial 11, which is the best one. Why is this happening? Why isn't Optuna trying other combinations? I'm using the default TPESampler (also default parameters) and SuccessiveHalvingPruner with min_resource 5 (other parameters are also default). I'm also running two processes, one for each GPU, but both sharing the same Optuna database.
    5 replies
    Pedro Vítor
    @pvcastro_twitter
    If I'm using a particular metric for a study, how do I set up the study so trials get pruned unless they reach a minimum score x after y epochs?
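    One way to express this (a sketch; the threshold, epoch count and training helper are placeholders) is to report the metric each epoch and raise TrialPruned yourself once the minimum score has not been reached in time; optuna.pruners.ThresholdPruner with n_warmup_steps may also cover this case, if I remember its behaviour correctly.

    import optuna

    MIN_SCORE = 0.5   # "x": placeholder minimum score
    MIN_EPOCHS = 3    # "y": placeholder number of epochs

    def objective(trial):
        for epoch in range(20):
            score = train_one_epoch_and_evaluate()  # hypothetical helper
            trial.report(score, epoch)
            if epoch + 1 >= MIN_EPOCHS and score < MIN_SCORE:
                raise optuna.TrialPruned()
        return score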
    6 replies
    TKouras
    @kouras_t_twitter
    Hello, I'm trying to do some regressions with Optuna, and after the 50 trials I want to keep the best r2_score and a scatter plot of the actual vs. predicted values. If I do the plotting inside the objective it will produce 50 scatter plots, but I only want the best one. Is there any way to do this?
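    One approach (a sketch; build_model and the data arrays are placeholders) is to return only the r2_score from the objective and, after the study has finished, re-fit a single model with study.best_params and make the one plot then:

    import optuna
    import matplotlib.pyplot as plt
    from sklearn.metrics import r2_score

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)  # objective returns the r2_score only

    best_model = build_model(**study.best_params)  # hypothetical helper
    best_model.fit(X_train, y_train)               # hypothetical data
    preds = best_model.predict(X_test)

    plt.scatter(y_test, preds)
    plt.xlabel("actual")
    plt.ylabel("predicted")
    plt.title(f"best trial, R2 = {r2_score(y_test, preds):.3f}")
    plt.show()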
    4 replies
    Alexander_Konstantinidis
    @AlexanderKonstantinidis

    Hi, I am running the following code and I get an error message, could you please help?
    import optuna.integration.lightgbm as lgbm
    best_params, tuning_history = dict(), list()
    booster = lgbm.train(params, dtrain, valid_sets=dval,
                         verbose_eval=0,
                         best_params=best_params,
                         tuning_history=tuning_history)

    The error message is:
    TypeError                                 Traceback (most recent call last)

    <ipython-input-28-c0d324367a3c> in <module>
          4                      verbose_eval=0,
          5                      best_params=best_params,
    ----> 6                      tuning_history=tuning_history)
          7

    ~\Anaconda3\envs\tf2\lib\site-packages\optuna\integration\_lightgbm_tuner\__init__.py in train(*args, **kwargs)
         32     _imports.check()
         33
    ---> 34     auto_booster = LightGBMTuner(*args, **kwargs)
         35     auto_booster.run()
         36     return auto_booster.get_best_booster()

    TypeError: __init__() got an unexpected keyword argument 'best_params'
    Thank you.

    1 reply
    Hiroyuki Vincent Yamazaki
    @hvy

    As always, thanks for all the feedback and contributions. We’ve just released v2.8.0 with several interesting features and improvements.

    🙊Constant Liar (CL) for TPE improves distributed search
    🌳Tree-structured search space support for multivariate TPE
    🪞Copying Studies across storages
    📞Callbacks to re-run a pre-empted trial

    Check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.8.0 or with the Tweet https://twitter.com/OptunaAutoML/status/1401799603154939908.

    Patshin_Anton
    @paantya

    Hi All!

    Can you please tell me whether it is possible to use optuna.pruners together with n_jobs = 10, and what the best way is to do it to get a speedup?

    5 replies
    Miguel Crispim Romao
    @romanovzky
    Hi all. I have an objective function that has a lot of invalid regions, which I signal to the study by returning a NaN. I notice that the TPE sampler spends a lot of time/trials around invalid regions after a while. Is this by construction, due to the "uncertainty driven" GP logic? Is there a way of preventing the TPE sampler from spending so long trying points with high uncertainty due to previous NaNs in the vicinity? Cheers
    5 replies
    adriannaziel
    @adriannaziel
    Hello,
    I have a question about hyperparameter importance. When I use optuna.importance.get_param_importances(study) and optuna.visualization.plot_param_importances(study), both on the same single-objective study and with default parameters, the results differ. Why is that? Or maybe I am doing something wrong? I tried to look for an explanation in the Optuna docs but haven't found anything.
    1 reply
    Krishna Bhogaonker
    @00krishna
    Hello there optuners, this is my first time using Optuna and it works well except for one small thing. I am not exactly sure how to explain this. I use the Weights and Biases website (wandb.com) to track all of the model runs (the logs), and I am using pytorch-lightning to run my model. I am able to create the objective() function and run the training, but it seems that the log on Weights and Biases is getting overwritten. So instead of different runs or experiments for each trial, I just see one really funky trial with training error and losses everywhere, because everything is mixed together.
    Krishna Bhogaonker
    @00krishna
    Here is the code if it helps. I am not sure if there is a way to tell optuna to initiate a new experiment each time the trial runs.
    def objective(trial: optuna.trial.Trial) -> float:
    
        input_window_size = trial.suggest_categorical("input_window_size", [20, 30, 40])
        output_window_size = trial.suggest_categorical("output_window_size", [ 20, 30, 40])
        test_percentage = 0.20
        val_percentage = 0.20
        lags = [1, 2, 3, 365, 366, 367]
        lag_combos = list(powerset(lags))
        laglist = trial.suggest_categorical("lags", lag_combos)
        drop_columns = ['pr', 'tmmx']
        drop_column_combos = list(powerset(drop_columns))
        remove_columns = trial.suggest_categorical("remove_columns", drop_column_combos)
        batch_size = trial.suggest_categorical("batch_size",[16, 32, 64, 128])
    
        datamodule = get_dataset('Sensor1') 
        datamodule = datamodule(input_window_size=input_window_size,
                                output_window_size=output_window_size,
                                test_percentage=test_percentage,
                                val_percentage=val_percentage,
                                laglist=laglist,
                                remove_columns=remove_columns,
                                batch_size=batch_size)
    
        datamodule.prepare_data()
        datamodule.setup()
    
    
        hidden_dim = trial.suggest_categorical("hidden_dim", [16, 32, 64, 128])
        num_layers = trial.suggest_int("num_layers", 1, 3)
        dropout = trial.suggest_float("dropout", 0.2, 0.5, step=0.1)
    
        optimizer_name = trial.suggest_categorical("optimizer", ["Adam", "RMSprop", "SGD"])
        lr = trial.suggest_uniform("lr", 1e-5, 1e-1)
    
        model = LitDivSensor(num_features = datamodule.num_features,
                                hidden_dim = hidden_dim,
                                num_layers = num_layers,
                                dropout = dropout,
                                debug = False,
                                learning_rate = lr,
                                batch_size = batch_size,
                                optimizer_name=optimizer_name)
    
        tb_logger = pl_loggers.TensorBoardLogger('logs/', name='division-bell-rnn')
        wandb_logger = pl_loggers.WandbLogger(name='division-bell-rnn'+ str(date_time), 
                                              save_dir='logs/', 
                                              project='division-bell', 
                                              entity='****',
                                              offline=False)
        trainer = pl.Trainer(
            logger=[tb_logger, wandb_logger],
            checkpoint_callback=False,
            max_epochs=EPOCHS,
            gpus=1 if torch.cuda.is_available() else None,
            callbacks=[PyTorchLightningPruningCallback(trial, monitor="val_loss")])
    
        hyperparameters = dict(input_window_size=input_window_size, 
                               output_window_size=output_window_size,
                               laglist=laglist,
                               remove_columns=remove_columns,
                               batch_size = batch_size,
                               hidden_dim=hidden_dim,
                               num_layers=num_layers,
                               dropout=dropout,
                               optimizer_name=optimizer_name,
                               learning_rate=lr)
    
        trainer.logger.log_hyperparams(hyperparameters)
        trainer.fit(model, datamodule=datamodule)
    
        return trainer.callback_metrics["val_loss"].item()
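    One possible cause (an assumption, not verified against this exact setup) is that W&B keeps logging into the same run across trials unless each run is explicitly finished; giving each trial a unique run name and calling wandb.finish() at the end of the objective may separate them:

    import wandb

    def objective(trial):
        wandb_logger = pl_loggers.WandbLogger(
            name=f"division-bell-rnn-trial-{trial.number}",  # unique name per trial
            save_dir="logs/",
            project="division-bell",
        )
        # ... build the trainer/model and call trainer.fit(...) as above ...
        result = trainer.callback_metrics["val_loss"].item()
        wandb.finish()  # close this run so the next trial starts a fresh one
        return result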
    1 reply
    Philipp Dörfler
    @phdoerfler
    'sup! What RDBs does Optuna support? The docs mention SQLite, MySQL and PostgreSQL, but I can't find a complete list. I assume Optuna uses a separate library for the DB access which would have said list, right? Which one? Thanks!
    1 reply
    Alternatively: what exactly is the issue with SQLite and running Optuna in parallel? I'm planning on running at most 4 instances of Optuna in parallel (the machine has 4 GPUs). Is accessing SQLite simultaneously merely not as efficient, or will it introduce inconsistencies? I'd be completely OK with, e.g., locking happening; that's still plenty fast for my use case.
    I'm also on a system that isn't maintained by myself, so I can't easily install PostgreSQL or some other RDB. For that reason I was also looking into portable versions of PostgreSQL that would be self-contained in a single directory, but I only found something for Windows (and this is an Ubuntu machine I'm talking about).
    Philipp Dörfler
    @phdoerfler
    FWIW, I just realised that in https://optuna.readthedocs.io/en/stable/faq.html#how-can-i-use-two-gpus-for-evaluating-two-trials-simultaneously the code suggests that it is OK to use SQLite from multiple instances (on the same machine). Is this viable, or just an oversimplification for the sake of easy-to-read documentation?
    1 reply
    Renato Hermoza Aragonés
    @renato145
    I'm using Postgres while training on Kubernetes and a Slurm cluster.
    I have one question: I have some clusters with limited execution time, so how do I resume a trial?
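    Resuming an individual trial mid-way is not something I can speak to, but the study itself can be continued from the same storage by a later job; a sketch (the study name and connection URL are placeholders):

    import optuna

    storage = "postgresql://user:password@host/dbname"  # placeholder URL

    # First job: create the study, or reuse it if it already exists.
    study = optuna.create_study(study_name="my-study", storage=storage, load_if_exists=True)
    study.optimize(objective, n_trials=50)

    # A later job simply loads the same study and keeps optimizing.
    study = optuna.load_study(study_name="my-study", storage=storage)
    study.optimize(objective, n_trials=50)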
    2 replies
    Shalini
    @tomshalini
    Hello everyone,
    I want to plot the history of trials using "plot_optimization_history". It is not working with multiple objectives. Is there something I can try to visualize multi-objective sampler results?
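    As far as I know, plot_optimization_history assumes a single objective value; for a multi-objective study, optuna.visualization.plot_pareto_front may be closer to what you want. A sketch, assuming a study created with two directions:

    import optuna

    # `study` was created with e.g. optuna.create_study(directions=["minimize", "minimize"])
    fig = optuna.visualization.plot_pareto_front(study, target_names=["objective 0", "objective 1"])
    fig.show()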
    2 replies
    Evgeny Frolov
    @evfro

    Hi everyone. I have a three-level question about the best practices for skipping trials.

    1. Is it possible to prohibit sampling of certain combinations of values from hyper-parameter distributions? I have a complex search space and there's an interplay between sampled hyper-parameter values. As an example, after params a, b, c are sampled the trial is allowed to run only if, e.g., a * b > c.

    2. What is the most appropriate way to skip a trial before performing main calculations in an objective (e.g., defining model, running training, etc.)?
      Currently, once I have made all the necessary trial.suggest_* calls, I also call my custom is_valid(trial.params) function, which tells me whether the trial is allowed to run or not. A naïve implementation is to just call the is_valid function within the objective body, but I would like to detach the objective from the verification of trial parameters. I compose many models with different objectives and with different restrictions of the search space, so I'd like to avoid adding an is_valid(trial.params) call to all objectives and would prefer a cleaner solution, something like a pre-trial check, so that I only define it once and register it for any objective whatever it is. Does Optuna have such functionality? Or do I have to go with closures/decorators/custom objective classes? (See the sketch after this message.)

    3. Once the way to skip a trial is found, what would be the correct approach for actually skipping it in the study? I'm specifically interested in the case of TPESampler, which uses the history of trials for its next steps. There seem to be at least two ways to do so: either raise optuna.TrialPruned() if the trial should be skipped, or raise a custom exception like ConfigError and add it to the catch list of the study.optimize call. Which one is more appropriate in terms of ensuring the correct behaviour of TPESampler?

    Thanks!
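    A sketch of the pre-check idea from question 2 (make_objective, suggest_params, run and is_valid are user-defined names, not Optuna API; whether TrialPruned or a caught custom exception is better for TPESampler is exactly what question 3 asks, so this shows only one of the two options):

    import optuna

    def make_objective(suggest_params, run, is_valid):
        """Compose an objective from a suggestion step and a training step,
        with a single validity check registered in between."""
        def objective(trial):
            params = suggest_params(trial)  # all trial.suggest_* calls live here
            if not is_valid(params):
                # Option 1: prune the trial. Option 2 would be to raise a custom
                # exception and pass it to study.optimize(..., catch=(ConfigError,)).
                raise optuna.TrialPruned()
            return run(params)  # model definition, training, evaluation, ...
        return objective

    # usage: study.optimize(make_objective(suggest_params, run, is_valid), n_trials=100)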

    8 replies
    Khawaja Umair Ul Hassan
    @umairkhawaja
    I modified the pytorch simple code example given in this repo and replaced the training/validation loop with code similar to the sample training loop given in a classifier training example in the PyTorch docs. However, the study ends after a single trial. I'm assuming this has something to do with the return accuracy statement at the end of the objective function. Can someone explain the purpose of this statement, or better, does anyone have an idea what I could be doing wrong?
    1 reply