    Sandeep Thapa
    @Pager07
    New to Optuna. Can anyone please guide me on suggesting an int that goes in powers of 2, like [1, 2, 4, 8]? Many thanks.
    3 replies
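    One way to express this, as a minimal sketch (suggest the exponent and derive the power of two; suggest_categorical([1, 2, 4, 8]) is another option):

    import optuna

    def objective(trial):
        # Suggest the exponent, then derive the power of two from it.
        exponent = trial.suggest_int("exponent", 0, 3)
        n = 2 ** exponent  # one of [1, 2, 4, 8]
        return n  # toy objective; in practice n would feed a model

    study = optuna.create_study()
    study.optimize(objective, n_trials=10)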
    James Y
    @yuanjames
    Has anyone met the problem where calling the plot_optimization_history() method does not show the figure?
    6 replies
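    A common cause, as a hedged note: the visualization functions return a plotly figure object rather than rendering it, so outside of a notebook's automatic display of the last expression you may need to call .show() yourself. A minimal self-contained sketch:

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        return x ** 2

    study = optuna.create_study()
    study.optimize(objective, n_trials=10)

    fig = optuna.visualization.plot_optimization_history(study)
    fig.show()  # render explicitly; the returned figure alone shows nothing in a script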
    James Y
    @yuanjames

    May I ask if there is any example of CmaEsSampler usage?

    I understand that CmaEs is only used for sampling relative parameters, while a random sampler is used to sample independent parameters. Is the relative search space determined by the backend if we follow the demo code?

    import optuna
    
    
    def objective(trial):
        x = trial.suggest_uniform("x", -1, 1)
        y = trial.suggest_int("y", -1, 1)
        return x ** 2 + y
    
    
    sampler = optuna.samplers.CmaEsSampler()
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=20)
    9 replies
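    On the relative search space question, a sketch for inspecting it (assuming optuna.samplers.intersection_search_space, which computes the parameters shared by all completed trials):

    # `study` as in the demo code above. After some trials have completed,
    # this shows the search space a relative sampler such as CmaEsSampler
    # operates on; parameters outside it fall back to independent sampling.
    search_space = optuna.samplers.intersection_search_space(study)
    print(search_space)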
    Manishankar Singh
    @tonysinghmss
    Hi team.
    I am wondering how you implemented conditions and loops in the search space. Can you please explain or point me in the right direction?
    3 replies
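    For reference, Optuna's define-by-run style expresses conditions and loops as ordinary Python inside the objective; a minimal sketch with a toy objective:

    import optuna

    def objective(trial):
        # Condition: the classifier choice decides which parameters exist.
        classifier = trial.suggest_categorical("classifier", ["svc", "forest"])
        if classifier == "svc":
            # Only suggested on trials where classifier == "svc".
            c = trial.suggest_loguniform("svc_c", 1e-5, 1e5)
            value = abs(c - 1.0)  # toy objective
        else:
            max_depth = trial.suggest_int("max_depth", 2, 32)
            value = float(max_depth)  # toy objective
        # Loop: one parameter per layer, with the count itself a parameter.
        n_layers = trial.suggest_int("n_layers", 1, 3)
        for i in range(n_layers):
            value += trial.suggest_int("n_units_l{}".format(i), 4, 128) * 1e-3
        return value

    study = optuna.create_study()
    study.optimize(objective, n_trials=20)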
    SurajitTest
    @SurajitTest

    Hi Team, I am using a dataset of about 140,000 rows and 300 features (after categorical encoding). I am also using the Optuna integration for XGBoost (xgb.cv() with a pruning callback). First I tried xgboost 1.3.3 and optuna 2.4.0: the program ran for 18 hours on 32 CPUs and not a single trial completed. I then ran xgboost 1.3.1 with optuna 2.4.0: the program ran for 3 hours on 60 CPUs and again not a single trial completed. I am now trying xgboost 1.2.1 with optuna 2.3.0 on 60 CPUs. Can anyone help me understand whether there are any compatibility issues?

    The code that I am using is given below:

    # Imports assumed by this snippet (ratio1..ratio4, dtrain and skfolds
    # are defined elsewhere in the script).
    import optuna
    import xgboost as xgb
    from optuna.samplers import TPESampler

    def objective(trial):
        # Define the search space.
        param_sp = {
            'base_score': 0.5,
            'booster': 'gbtree',
            'colsample_bytree': trial.suggest_categorical('colsample_bytree', [0.7, 0.8, 0.9, 1.0]),
            'learning_rate': trial.suggest_categorical('learning_rate', [0.1]),
            'max_depth': trial.suggest_categorical('max_depth', [6, 8, 10]),
            'objective': 'binary:logistic',
            'scale_pos_weight': trial.suggest_categorical('scale_pos_weight', [ratio1, ratio2, ratio3, ratio4, 1, 10, 30, 50, 75, 99, 100]),
            'subsample': trial.suggest_categorical('subsample', [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]),
            'verbosity': 1,
            'tree_method': 'auto',
            'predictor': 'cpu_predictor',
            'eval_metric': 'aucpr'
        }

        # Add the pruning callback, observing the cross-validation metric.
        pruning_callback = optuna.integration.XGBoostPruningCallback(trial, "test-aucpr")

        # Perform native-API cross validation.
        xgb_cv_results = xgb.cv(
            param_sp, dtrain, stratified=True, folds=skfolds, metrics='aucpr',
            num_boost_round=500, early_stopping_rounds=50, as_pandas=True,
            verbose_eval=False, seed=42, shuffle=True, callbacks=[pruning_callback])

        # Set n_estimators as a trial attribute.
        trial.set_user_attr("n_estimators", len(xgb_cv_results))

        # Extract the best score.
        best_score = xgb_cv_results["test-aucpr-mean"].values[-1]
        return best_score

    pruner = optuna.pruners.MedianPruner(n_startup_trials=5, n_warmup_steps=20, interval_steps=10)
    study = optuna.create_study(
        study_name='XGB_Optuna_0.1_Iter1', direction='maximize',
        sampler=TPESampler(consider_magic_clip=True, seed=42, multivariate=False),
        pruner=pruner)

    # Perform the search. Note: n_jobs=-1 runs trials in threads while
    # XGBoost is itself multithreaded, which can oversubscribe the CPUs.
    print('\nPerforming Bayesian Hyper Parameter Optimization..')
    study.optimize(objective, n_trials=100, n_jobs=-1)
    7 replies
    James Y
    @yuanjames
    Hi, may I ask a question about TPE? In the original paper, the hyperparameter search space is structured as a tree. I see that Optuna's TPE uses independent sampling from a GMM for each hyperparameter. Therefore it is not completely the same as the original, right?
    4 replies
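    Right, the default TPESampler samples each parameter independently; a joint mode existed as an experimental option at the time of these messages. A minimal sketch:

    import optuna

    # multivariate=True models the parameters jointly instead of fitting an
    # independent kernel density estimator per parameter.
    sampler = optuna.samplers.TPESampler(multivariate=True)
    study = optuna.create_study(sampler=sampler)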
    Hiroyuki Vincent Yamazaki
    @hvy
    Thanks for all your contributions, we’ve just released v2.5.0. This is a minor release, but still a big one. Please check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.5.0 or via the Tweet https://twitter.com/OptunaAutoML/status/1356131644466372610. In short, it contains
    • Ask-and-tell interface to construct trials without objective function callbacks.
    • Heartbeat monitoring of trials to automatically fail stale trials.
    • Constrained optimization support for NSGA-II, a well-known evolutionary algorithm for multi-objective optimization.
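    A minimal sketch of the new ask-and-tell interface:

    import optuna

    study = optuna.create_study()
    for _ in range(10):
        trial = study.ask()  # create a trial without an objective callback
        x = trial.suggest_float("x", -10, 10)
        study.tell(trial, x ** 2)  # report the result explicitly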
    Jyot Makadiya
    @jeromepatel
    Hello everyone,
    I am new here, a 3rd-year CS undergraduate. I am thinking about participating in GSoC 2021. I have always been a big fan of Optuna; it has helped me a lot in Kaggle competitions. Thank you for creating such a great framework! When I found out that Optuna will be in GSoC 2021, I got very excited to work on and contribute to this great community! Some guidance would be helpful, such as where to start or how to begin contributing (like some good starting issues).
    Madhu Charan
    @madhucharan
    Hello everyone. My name is Madhu. I am an undergraduate student from India. I recently started contributing to open source and found Optuna. I would like someone to guide me through some beginner issues, as well as some resources for getting started with contributing to the codebase. Thank you
    Crissman Loomis
    @Crissman
    @jeromepatel @madhucharan Welcome both of you. Please start with the Contribution Welcome issues! https://github.com/optuna/optuna/issues?q=is%3Aopen+is%3Aissue+label%3Acontribution-welcome
    Jyot Makadiya
    @jeromepatel
    Thank you for your quick reply, I will start with some welcome issues then!
    Madhu Charan
    @madhucharan
    Thank you @Crissman :) will start working on it
    SurajitTest
    @SurajitTest

    Hi Team, I am using XGBoost 1.3.3 and Optuna 2.4.0. My dataset has 138k rows and 300 columns (after categorical encoding). I am trying to replicate the example at https://github.com/optuna/optuna/blob/master/examples/pruning/xgboost_integration.py (but only for booster='gbtree'). When I run the code, I get the message 'segmentation fault' and the program returns to the $ prompt (I am using Amazon Linux). Can anyone please help me understand why I am getting the 'segmentation fault'?

    The code that I am using is given below:

    # Imports assumed by this snippet.
    import numpy as np
    import optuna
    import xgboost as xgb
    from optuna.samplers import TPESampler
    from sklearn import metrics

    # Import data into xgb.DMatrix form (X_train, y_train, X_test, y_test
    # and ratio1 are defined elsewhere in the script).
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dtest = xgb.DMatrix(X_test, label=y_test)

    # Define the search space and the objective function.
    def objective(trial):
        param_sp = {
            'base_score': 0.5,
            'booster': 'gbtree',
            'colsample_bylevel': trial.suggest_categorical('colsample_bylevel', [0.7, 0.8, 0.9]),
            'colsample_bynode': trial.suggest_categorical('colsample_bynode', [0.7, 0.8, 0.9]),
            'colsample_bytree': trial.suggest_categorical('colsample_bytree', [0.7, 0.8, 0.9]),
            'gamma': trial.suggest_categorical('gamma', [0.0000001, 0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
            'learning_rate': trial.suggest_categorical('learning_rate', [0.1]),
            'max_delta_step': trial.suggest_categorical('max_delta_step', [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
            'max_depth': trial.suggest_categorical('max_depth', [10]),
            'min_child_weight': trial.suggest_categorical('min_child_weight', [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21]),
            'objective': 'binary:logistic',
            'reg_alpha': trial.suggest_categorical('reg_alpha', [0.000000001, 0.00000001, 0.0000001, 0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100]),
            'reg_lambda': trial.suggest_categorical('reg_lambda', [0.000000001, 0.00000001, 0.0000001, 0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100]),
            'scale_pos_weight': trial.suggest_categorical('scale_pos_weight', [ratio1, 1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 1000]),
            'seed': 42,
            'subsample': trial.suggest_categorical('subsample', [0.5, 0.6, 0.7, 0.8, 0.9]),
            'verbosity': 1,
            'tree_method': 'auto',
            'predictor': 'cpu_predictor',
            'eval_metric': 'error'
        }

        # Add the pruning callback, observing the validation error.
        pruning_callback = optuna.integration.XGBoostPruningCallback(trial, "validation-error")

        # Perform validation.
        xgb_bst = xgb.train(
            param_sp, dtrain, num_boost_round=1000, evals=[(dtest, "validation")],
            early_stopping_rounds=100, verbose_eval=False, callbacks=[pruning_callback])

        # Set n_estimators as a trial attribute.
        trial.set_user_attr("n_estimators", xgb_bst.best_ntree_limit)

        # Extract the best score.
        preds = xgb_bst.predict(dtest)
        pred_labels = np.rint(preds)
        f1 = metrics.f1_score(y_test, pred_labels)
        return f1

    pruner = optuna.pruners.MedianPruner(n_startup_trials=5, n_warmup_steps=20, interval_steps=10)
    # Note: the objective returns an F1 score (higher is better), so
    # direction='minimize' looks inverted here.
    study = optuna.create_study(
        study_name='XGB_Optuna_0.1_max_depth_10_Error_Val_500_trials',
        direction='minimize',
        sampler=TPESampler(consider_magic_clip=True, seed=42, multivariate=False),
        pruner=pruner)

    # Perform the search.
    print('\nPerforming Bayesian Hyper Parameter Optimization..')
    study.optimize(objective, n_trials=500, n_jobs=16)
    1 reply
    Ghost
    @ghost~5ff04c7ad73408ce4ff7d2aa
    Hello, just wanted to introduce myself here. My name is Aryan and I am currently doing my undergrad in CS. I have already started contributing to Optuna to take part in GSoC '21, and I have to say this has been one of the most interesting projects I have ever been part of.
    2 replies
    FR8803
    @FR8803
    Hey guys, I'm currently trying to optimize the hyperparameters of a deep Q reinforcement learning model implemented with tf-agents. It is based on an OpenAI gym environment, say for example "CartPole-v0". So far I haven't found any examples of an implementation with Optuna. Do you know of any code examples on GitHub, and could you share any ideas on how to approach this problem? Thanks a lot in advance!
    4 replies
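    A generic pattern that may help (a sketch; train_and_evaluate is a hypothetical stand-in for a tf-agents training loop, not a real API):

    import optuna

    def train_and_evaluate(learning_rate, fc_layer_units):
        # Hypothetical helper: build a DQN agent with these hyperparameters,
        # train it on the gym environment, and return the average return.
        return 0.0  # placeholder

    def objective(trial):
        learning_rate = trial.suggest_loguniform("learning_rate", 1e-5, 1e-2)
        fc_layer_units = trial.suggest_int("fc_layer_units", 32, 256)
        return train_and_evaluate(learning_rate, fc_layer_units)

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)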
    razou
    @razou

    Hello
    I'm trying to visualize the study output in a Jupyter notebook

    optuna.visualization.plot_optimization_history(study)
    optuna.visualization.plot_slice(study)
    optuna.visualization.plot_contour(study, params=['epochs', 'learning_rate'])

    Nothing happens when I run these commands.

    Has anybody tried doing some visualization in a similar environment?

    5 replies
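    A hedged note: in some Jupyter setups plotly needs its notebook renderer selected, or the figure shown explicitly, before anything appears:

    import plotly.io as pio

    pio.renderers.default = "notebook"  # assumes notebook renderer support is installed

    fig = optuna.visualization.plot_slice(study)  # `study` as in the snippet above
    fig.show()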
    Ghost
    @ghost~5ff04c7ad73408ce4ff7d2aa
    Are the RDB storage tests from CircleCI still relevant, or have they become outdated? Asking since they seem to have been removed from the docs.
    2 replies
    Mahmoud Abdelkhalek
    @mhdadk
    Is there a rule of thumb for how to choose the number of epochs per trial?
    2 replies
    Miguel Crispim Romao
    @romanovzky
    Hi all, I have a question. I'm training a regressor with Keras, and my objective is the R2 score, which is non-negative here and which I want to maximise. The R2 score is calculated at the end of training for each HP combination. However, I want to use the MedianPruner, which should be monitoring the val_loss, which is supposed to be minimised. How can I be sure that the pruner is minimising the val_loss while still having a maximise optimisation step?
    3 replies
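    One workaround sketch, assuming the pruner compares reported intermediate values using the study direction: report the negated validation loss so that "larger is better" matches a maximize study. Training here is a stand-in:

    import optuna

    def objective(trial):
        lr = trial.suggest_loguniform("lr", 1e-5, 1e-1)
        val_loss = 1.0
        for epoch in range(10):
            # Stand-in for real training: the loss decreases over epochs.
            val_loss = 1.0 / (epoch + 1) + lr
            # Report -val_loss so the intermediate values share the study's
            # maximize direction; MedianPruner then compares them consistently.
            trial.report(-val_loss, step=epoch)
            if trial.should_prune():
                raise optuna.TrialPruned()
        return 1.0 - val_loss  # stand-in for the final R2 score

    study = optuna.create_study(
        direction="maximize",
        pruner=optuna.pruners.MedianPruner())
    study.optimize(objective, n_trials=20)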
    Francisco Villaescusa-Navarro
    @franciscovillaescusa
    Hi. When using Optuna in parallel (e.g. 4 GPUs running in different terminals against the same common database), how do n_trials and n_startup_trials behave? 1) Will the 4 GPUs run 50 trials in total or 200 trials in total? 2) Will the random sampling stop after n_startup_trials in total, or after each GPU has carried out n_startup_trials? Thanks!
    6 replies
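    For reference, the usual multi-process pattern (a sketch; as I understand it, n_trials counts per optimize() call, so 4 workers each given 50 trials produce 200 in total, while n_startup_trials is counted against the shared study's history):

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        return x ** 2

    # Run this same script in each terminal; the workers share trials
    # through the common database.
    study = optuna.create_study(
        study_name="shared_study",
        storage="sqlite:///example.db",
        load_if_exists=True)
    study.optimize(objective, n_trials=50)  # per process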
    2403hwaseer
    @2403hwaseer
    Hi! I am Harman Waseer from IIT Roorkee and I am looking forward to participating in GSoC '21. I have read about the projects and I am interested in working on the Web Dashboard project. Can someone guide me on how to get started? Thanks!
    4 replies
    viiids
    @viiids
    Hi, I have a question about customizing acquisition functions. Is it possible to do this in Optuna? Essentially I want to continue using SingleTaskGP or whatever model Optuna uses along with the acquisition function, but I want to sample many points and then run another ranking pass to sort them using an extra function. This final list is what I want to sample values from. To record the details, I have also created a ticket: optuna/optuna#2339. Feel free to reply to that.
    Jyot Makadiya
    @jeromepatel
    Hello,
    I am going to apply for Optuna in GSoC 2021. As @Crissman suggested, I have submitted my first PR, #2346; thank you @toshihikoyanase and Kento Nozawa for helping me with that. I am interested in working on the sampler projects. Related to that, I am currently working on the issue optuna/optuna#2233.
    With reference to that issue, I have one question: if I modify a sampler's Python file, e.g. _cmaes.py, how can I test my local changes when I implement a new function (e.g. after_trial)? I am aware of tests/samplers_test, but I am not sure how to use it. Any suggestions and guidance are welcome. Thank you!
    6 replies
    Dário Passos
    @dario-passos
    Hi everyone. I'm starting to use Optuna in a project on convolutional neural nets in chemometrics, and I was wondering if there is any video or tutorial that shows how to deploy/use the new optuna-dashboard. I'm used to running my experiments in a Jupyter notebook, and so far I haven't figured out how to launch the dashboard. Thanks to all the Optuna community for a great piece of software.
    5 replies
    Peter Cotton
    @microprediction
    Hi all. I've been using Optuna and also benchmarking it. I'm trying to figure out how to choose a good collection of option choices and tweaks, so I can try them all against my problems. My current effort is at https://github.com/microprediction/humpday/blob/main/humpday/optimizers/optunacube.py
    3 replies
    By the way, I also wrote a small package to compare optimizer performance, albeit in a somewhat limited way focused on my domain. There is an article at https://www.microprediction.com/blog/humpday and feedback is welcome. As I don't claim to be an Optuna expert, I suspect some tweaking would help.
    That said, optuna is doing well.
    braham-snyder
    @braham-snyder

    Hi -- when running multiple processes in distributed mode on a single machine, how should I choose n_jobs?

    My guess is -1 or maybe 1, but I'm not even certain of that.

    2 replies
    braham-snyder
    @braham-snyder
    I should clarify: my objective is CPU-bound and GIL-locked -- n_jobs=-1 without distributed storage uses only 3 of 16 cores.
    Crystal Humphries
    @CrystalHumphries
    Is there a way to tweak Optuna so that one can vary the set of values tested at each acquisition step? That is, instead of testing one new set of parameters at a time, I would prefer to test >= 3 at once.
    3 replies
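    The ask-and-tell interface (v2.5.0+) supports this batch pattern; a sketch:

    import optuna

    study = optuna.create_study()
    while len(study.trials) < 30:
        # Ask for a batch of 3 candidate parameter sets.
        trials = [study.ask() for _ in range(3)]
        xs = [t.suggest_float("x", -10, 10) for t in trials]
        # Evaluate the batch (possibly in parallel), then report back.
        for t, x in zip(trials, xs):
            study.tell(t, x ** 2)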
    Hamza Ali
    @ryzbaka
    Hi everyone, my name's Hamza (https://github.com/ryzbaka). I'm experienced in full-stack web development, data engineering, and computational statistics. I'm interested in working on the Optuna web dashboard for GSOC 2021.
    3 replies
    I'd like to know more about the process of getting started with the Optuna codebase. Based on the document here, should I start by figuring out how to port the dashboard to TypeScript, or is there something else that I'm missing?
    Luca Ponzoni
    @luponzo86
    Hi, this may be a stupid question. I'm trying to use multi-objective optimization in Optuna 2.5.0 and I'm not sure how to turn off pruning to avoid the error: NotImplementedError("Trial.report is not supported for multi-objective optimization.")
    7 replies
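    A sketch of one way around it (assuming the v2.4+ style of multi-objective studies via directions): avoid Trial.report entirely and make "no pruning" explicit with NopPruner:

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        # No trial.report() calls here: Trial.report is what raises the
        # NotImplementedError for multi-objective studies.
        return x ** 2, (x - 2) ** 2

    study = optuna.create_study(
        directions=["minimize", "minimize"],
        pruner=optuna.pruners.NopPruner())  # disable pruning explicitly
    study.optimize(objective, n_trials=20)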
    Dmitry Selivanov
    @dselivanov
    Hi folks. I've tried to Google this but could not find a solution. I have a proxy loss function loss = weight_1 * loss_component_1 + weight_2 * loss_component_2 + ..., with the constraints sum(weight_i) = 1 and 0 < weight_i < 1 for all i. I want to find the optimal combination of weight_i, so essentially I need to sample parameters from the probability simplex. Of course I can sample parameters from a uniform distribution and then normalize them, but I don't feel this is the right way.
    7 replies
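    One sketch of an alternative to plain normalization: map each suggested u_i ~ U(0,1) to -log(u_i), which is Exp(1)-distributed, and normalize; the resulting weights are uniform on the simplex (Dirichlet(1, ..., 1)). The loss components here are hypothetical placeholders:

    import math
    import optuna

    N_COMPONENTS = 3

    def objective(trial):
        # Normalized Exp(1) draws are Dirichlet(1,...,1), i.e. uniform
        # on the simplex, so the weights always sum to exactly 1.
        es = [-math.log(trial.suggest_float("u{}".format(i), 1e-10, 1.0))
              for i in range(N_COMPONENTS)]
        total = sum(es)
        weights = [e / total for e in es]
        loss_components = [1.0, 2.0, 3.0]  # placeholders for the real losses
        return sum(w * l for w, l in zip(weights, loss_components))

    study = optuna.create_study()
    study.optimize(objective, n_trials=20)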
    esarvestani
    @esarvestani

    Hi everyone. I am going to use Optuna for hyperparameter optimization of an iterative process in which the number of samples increases with each iteration. I start Optuna from scratch for iteration 0, but for the following iterations I use the accumulated trials from all previous iterations. With this warm-start scheme, after some iterations the effective search space becomes very small and concentrates on a tiny region of the parameter space. Now I need to give it the chance to look into other regions after a few iterations. One idea is to force it to forget trials from long ago; for example, when it starts iteration 5 I want it to ignore the trials from iterations 0 and 1, and so on. To do so I use this piece of code to manually change the state of those trials from 'COMPLETE' to 'FAIL'; when the study is loaded, only trials with state='COMPLETE' are then taken into account.

    import sqlite3

    import pandas as pd

    def makefailSqliteTable(storage):
        sqliteConnection = None  # avoid NameError in `finally` if connect fails
        try:
            sqliteConnection = sqlite3.connect(storage)
            cursor = sqliteConnection.cursor()
            # Mark every trial as failed so a reloaded study ignores all history.
            sql_update_query = """Update trials set state = 'FAIL' """
            cursor.execute(sql_update_query)
            sqliteConnection.commit()
            cursor.close()
        except sqlite3.Error as error:
            print("Failed to update sqlite table", error)
        finally:
            if sqliteConnection:
                sqliteConnection.close()
                print("The SQLite connection is closed")

    def updateSqliteTable(storage, N):
        sqliteConnection = None
        try:
            sqliteConnection = sqlite3.connect(storage)
            cursor = sqliteConnection.cursor()
            df = pd.read_sql_query("SELECT * from trials", sqliteConnection)
            # Restore only the most recent N trials to 'COMPLETE'.
            sql_update_query = """Update trials set state = 'COMPLETE' where number > """ + str(len(df) - N)
            cursor.execute(sql_update_query)
            sqliteConnection.commit()
            cursor.close()
        except sqlite3.Error as error:
            print("Failed to update sqlite table", error)
        finally:
            if sqliteConnection:
                sqliteConnection.close()
                print("The SQLite connection is closed")

    I would like to know whether this procedure does what I want. I mean, does it really forget the history from long ago?

    2 replies
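    An alternative sketch that avoids mutating the database directly (assuming Optuna >= 2.0, where Study.add_trial is available; the study name and storage URL are hypothetical): copy only the recent trials into a fresh in-memory study so the sampler sees a truncated history:

    import optuna

    old_study = optuna.load_study(study_name="iter_study", storage="sqlite:///example.db")
    completed = [t for t in old_study.trials
                 if t.state == optuna.trial.TrialState.COMPLETE]

    new_study = optuna.create_study()
    for t in completed[-100:]:  # keep only the most recent 100 trials
        new_study.add_trial(t)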
    Hiroyuki Vincent Yamazaki
    @hvy
    Thanks for all your contributions, we’ve just released v2.6.0. Please check out the highlights and release note at https://github.com/optuna/optuna/releases/tag/v2.6.0 or via the Tweet https://twitter.com/OptunaAutoML/status/1368818695250669570. In short, it contains
    • Warm starting CMA-ES and sep-CMA-ES support
    • PyTorch Distributed Data Parallel support
    • RDB storage and heartbeat improvement
    • Pre-defined search space with ask-and-tell interface
    SM-91
    @SM-91
    Hi everyone, I want to know if we can retrieve a CSV file of a study from the study DB. Is that possible?
    2 replies
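    A sketch using the public API (study name and storage URL are placeholders): study.trials_dataframe() returns a pandas DataFrame with one row per trial, which pandas can write to CSV:

    import optuna

    study = optuna.load_study(study_name="my_study", storage="sqlite:///study.db")
    df = study.trials_dataframe()
    df.to_csv("study_trials.csv", index=False)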
    emaldonadocruz
    @emaldonadocruz
    Howdy! My name is Eduardo, and I am using Optuna for an optimization problem. I am using a search space, and I would like to return more than one metric from the objective. When I try to do so, as in the example below, I get the following error message: "Trial 2 failed, because the number of the values 3 does not match the number of the objectives 1."
    def objective(trial):
        # Instantiate the model and evaluate (placeholders for the real code).
        metric_1 = ...
        metric_2 = ...

        # Get additional metrics.
        metric_3 = ...

        return metric_1, metric_2, metric_3

    d_space = np.linspace(0.05, 0.95, 2)
    l_space = np.linspace(0.0001, 0.0002, 2)
    search_space = {"D": d_space, "L": l_space}

    # Note: a study created without `directions` is single-objective, which
    # is why returning three values raises the error above; a multi-objective
    # study needs e.g. directions=["maximize"] * 3.
    study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))

    study.optimize(objective,
                   n_trials=d_space.shape[0] * l_space.shape[0],
                   show_progress_bar=True)
    3 replies
    P4tr1ck99
    @P4tr1ck99
    Hi everyone, I'm a rookie with Optuna and I would like to know how I can change the evaluation metric used for finding the best-fitting hyperparameters. If I understood correctly, the metric on which Optuna decides whether a hyperparameter set is a good one is the accuracy. Instead of accuracy I would prefer to use the F1 score or recall.
    How could that be implemented with Optuna?
    2 replies
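    Optuna optimizes whatever number the objective returns, so switching metrics is a change inside the objective; a sketch with scikit-learn (the dataset and model choice are illustrative):

    import optuna
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, random_state=0)

    def objective(trial):
        clf = RandomForestClassifier(
            max_depth=trial.suggest_int("max_depth", 2, 16),
            n_estimators=trial.suggest_int("n_estimators", 10, 100),
            random_state=0)
        # scoring="f1" (or "recall") replaces accuracy as the optimized metric.
        return cross_val_score(clf, X, y, scoring="f1", cv=3).mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=20)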
    Chris Fonnesbeck
    @fonnesbeck
    I'm running into an issue trying to optimize the hyperparameters for a TF Estimator model (specifically a DNNClassifier). When I set up and run an Optuna study it quickly uses up all of my session's resources and crashes (this is using either a high-memory GPU Colab session or an AWS Deep Learning AMI). I haven't had this problem using non-Estimator TF models, nor does it occur when I run my model outside of Optuna, so I'm wondering if there is something special that needs to be done with them.
    17 replies
    Dário Passos
    @dario-passos
    Hi! Is there a way of changing the color map that optuna.visualization.plot_contour uses by default? Thanks!
    3 replies
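    There is no documented colormap option that I know of, but the function returns a plotly figure, so a sketch of one workaround is to edit the traces afterwards:

    fig = optuna.visualization.plot_contour(study)  # `study` from the surrounding context
    # Assumption: contour traces accept a standard plotly colorscale name.
    fig.update_traces(colorscale="Viridis", selector=dict(type="contour"))
    fig.show()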
    Robin-des-Bois
    @Robin-des-Bois

    Hi :-)
    Is there a predefined way to nest trial parameters?
    I would like to pass a trial object into a function, and all the parameters that get added inside this function should be prefixed automatically by a string that I specify.

    I imagine the interface looking something like this, but did not find anything similar in the API:

    def configure_subsystem_a(trial: optuna.Trial) -> SubSystemA:
        n_params = trial.suggest_int("n_params", 1, 3)
        return SubSystemA(n_params)

    trial = ...

    subsystem_a = configure_subsystem_a(trial.withPrefix('subsystem_a'))

    This should result in a config like this:

    {
        'subsystem_a.n_params': 3
    }

    It would be quite easy to build this functionality myself by wrapping the trial object, but if functionality like this is provided, I would prefer to use that.

    1 reply
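    Such a prefixing helper doesn't exist in the API as far as I know; a minimal hypothetical wrapper sketch along the lines described above:

    import optuna

    class PrefixedTrial:
        """Hypothetical wrapper that prefixes every parameter name."""

        def __init__(self, trial, prefix):
            self._trial = trial
            self._prefix = prefix

        def suggest_int(self, name, low, high):
            return self._trial.suggest_int(self._prefix + "." + name, low, high)

        def suggest_float(self, name, low, high):
            return self._trial.suggest_float(self._prefix + "." + name, low, high)

    def objective(trial):
        sub = PrefixedTrial(trial, "subsystem_a")
        n_params = sub.suggest_int("n_params", 1, 3)  # stored as "subsystem_a.n_params"
        return float(n_params)

    study = optuna.create_study()
    study.optimize(objective, n_trials=5)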
    MaximilianSamLickeAgdur
    @MaximilianSamLickeAgdur

    Hi,

    What is the preferred way of dealing with trials/suggestions whose ranges depend on each other? See the __call__ function below.
    Is the preferred way to do as below, or is it better to set a central value and penalize values outside the range?
    Does this method even work with Optuna (the sampler being used is TPE)? Are certain samplers better at this?
    Any tips on literature?

    import torch

    class Objectiveoptim(object):

        def __init__(self, idmodelsdict, value):
            self.idmodelsdict = idmodelsdict
            self.value = value

        def __call__(self, trial):
            totalfactor = 0
            totalvalueused = 0
            valuedict = dict()

            for id_, model in self.idmodelsdict.items():
                model.eval()

                # The upper bound shrinks as earlier suggestions consume the
                # shared budget, so the suggestion ranges depend on each other.
                valuedict[id_] = trial.suggest_float(id_, 0, self.value - totalvalueused)
                totalvalueused += valuedict[id_]
                totalfactor += model(torch.tensor([valuedict[id_]], dtype=torch.float32))

            return totalfactor
    1 reply
    Francesco Carli
    @mr-fcharles_gitlab

    Hi,

    I'm having difficulties understanding how I can use the command

    optuna.visualization.plot_param_importances(study)

    to visualize hyperparameter importances while performing multi-objective optimization. I understand that I should specify the metric with respect to which I want the importances to be computed, but I don't understand how to do so.

    Thanks in advance!

    7 replies
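    If the installed version exposes it, plot_param_importances accepts a target callable for exactly this (my understanding; check your version's docs):

    # Importances with respect to the first objective value.
    fig = optuna.visualization.plot_param_importances(
        study,  # `study` from the surrounding context
        target=lambda t: t.values[0],
        target_name="first objective")
    fig.show()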
    Dário Passos
    @dario-passos
    Hey everyone. I've been using optuna-dashboard for a couple of weeks now and I'm seeing some weird behaviour. I'm using Optuna 2.6 to optimize the hyperparameters of a relatively small (5 to 8 layers) tensorflow/keras convolutional neural network in a Jupyter notebook, and optuna-dashboard 0.3.1 (SQLAlchemy 1.3.22) to monitor the evolution of the optimization. My default browser is Chrome (version 89.0.4389.82) and my OS is Windows 10. The strange behaviour I've started noticing is very high RAM consumption by optuna-dashboard after a certain number of trials in my optimization studies, much larger than the database file being created by Optuna. For example, I have a study.db file of roughly 11 MB corresponding to 1563 trial points. Displaying this in the browser gobbles up around 4 GB of RAM, and that is if I shut down optuna-dashboard and reload the study.db from scratch. When I continuously monitor the optimization experiment from the beginning, optuna-dashboard reaches around 6 GB of RAM (for exactly the same file). Around trial 500 (more or less), the browser starts to get unresponsive or very slow with selection buttons, etc. This makes analysing the results difficult and is quite annoying... I see the same behaviour on two different PCs with different graphics cards and memory configurations. My CPUs and GPUs always run below 60% utilization, so lack of resources does not seem to be the cause. Is this supposed to happen? What can I do to make the process run faster?
    1 reply
    yywangvr
    @yywangvr
    import optuna
    
    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)
    
        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        v = v0 + v1
        return v
    Hello!
    Is it possible to log the values of v0 and v1 through functions such as study.trials_dataframe()? I also wish to analyse the intermediate values from the objective function.
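    One sketch using trial user attributes, which later appear as columns in study.trials_dataframe():

    import optuna

    def objective(trial):
        x = trial.suggest_float("x", 0, 5)
        y = trial.suggest_float("y", 0, 3)

        v0 = 4 * x ** 2 + 4 * y ** 2
        v1 = (x - 5) ** 2 + (y - 5) ** 2
        # Persist the components; they show up as user-attribute columns
        # (e.g. user_attrs_v0, user_attrs_v1) in study.trials_dataframe().
        trial.set_user_attr("v0", v0)
        trial.set_user_attr("v1", v1)
        return v0 + v1

    study = optuna.create_study()
    study.optimize(objective, n_trials=10)
    print(study.trials_dataframe().filter(like="user_attrs").head())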
    Miguel Crispim Romao
    @romanovzky
    Hi all. Quick question: when using optuna.multi_objective.samplers.NSGAIIMultiObjectiveSampler, which by default has 50 trials per generation, if I set the study to perform 1000 trials, do I assume correctly that there will be 20 generations?
    3 replies
    yywangvr
    @yywangvr
    The solution is given by an Optuna author here: https://github.com/optuna/optuna/issues/2520#issuecomment-806578752