dumbjarvis
@dumbjarvis
Does optuna work on graphs? Like PyTorch geometric?
To be specific, tune params for community detection on graphs.
dlin1
@kdlin
The command optuna.visualization.plot_optimization_history(study) works in a Jupyter notebook but not in JupyterLab. What should I do to make it work in JupyterLab? Thank you.
3 replies
esarvestani
@esarvestani
Hi everyone. I want to use Optuna for a simple multi-objective optimization problem. It works well; the only problem is that retrieving the best trials takes far more time than the optimization itself. It is not clear to me why this simple Python attribute access should take so long. Here is the code: I have a function of two variables (x, y), and I want to find the value of y that minimizes the function for each given x.
```python
def multi_objective(trial, X):
    y = trial.suggest_float("y", -5., 5.)
    result = []
    for x in X:
        result.append((x**2 + y - 11)**2 + (x + y**2 - 7)**2)
    return result

optuna.logging.set_verbosity(optuna.logging.WARNING)
Y = np.linspace(start=-5., stop=5., num=500)
study = optuna.create_study(directions=['minimize'] * len(Y))
study.optimize(lambda trial: multi_objective(trial, Y), n_trials=1000)

# the next line takes too much time to run
bests = study.best_trials

best_values = [trial.values for trial in bests]
best_params = [trial.params for trial in bests]
```
2 replies
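For context on the slowness: `best_trials` computes the Pareto front with pairwise dominance checks in pure Python, which is O(n²·d); with 1000 trials and 500 objectives that is on the order of 5×10⁸ comparisons. A vectorized sketch of the same dominance check (a hypothetical helper, not Optuna API) that can be run on `np.array([t.values for t in study.trials])`:

```python
import numpy as np

def pareto_front_mask(values: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows (all objectives minimized).

    study.best_trials does the same O(n^2 * d) dominance check in pure
    Python; vectorizing the inner comparisons with NumPy is much faster.
    """
    n = len(values)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # Row j dominates row i if it is <= everywhere and < somewhere.
        dominated_by = np.all(values <= values[i], axis=1) & np.any(values < values[i], axis=1)
        if dominated_by.any():
            mask[i] = False
    return mask

values = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
print(pareto_front_mask(values))  # the third point is dominated by both others
```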
Pedro Vítor
Hi there! I have some hyperparameters that depend on others. For example, I'm testing two different learning-rate schedulers in AllenNLP, slanted_triangular and linear_with_warmup, and each has its own hyperparameters. How can I tie these dependent hyperparameters together? Also, is there any way to configure the sampler to take these dependencies into account, so it avoids combining and evaluating unnecessary scenarios?
6 replies
MaximilianSamLickeAgdur
@MaximilianSamLickeAgdur
Like the message above, I have hyperparameters that depend on each other; in particular, the last hyperparameter in the loop depends on all the others. Will the code example below be a problem? The chosen sampler is TPE (multivariate).
```python
class Randomoptim(object):
    def __init__(self, modelsdict):
        self.modelsdict = modelsdict
        self.value = 1.0

    def __call__(self, trial):
        totalfactor = 0
        valuedict = dict()

        for id_, model in self.modelsdict.items():
            model.eval()
            if id_ == list(self.modelsdict.keys())[-1]:
                valuedict[id_] = trial.suggest_float(id_, self.value, self.value)
            else:
                valuedict[id_] = trial.suggest_float(id_, 0, self.value)
            self.value -= valuedict[id_]

            totalfactor += model(valuedict[id_])
        return totalfactor
```
3 replies
Dmitry
@Akkarine
Hello! Is there any way to prune a study of trials that failed or are stale in the RUNNING state?
Or only manually, in the database?
Dmitry
@Akkarine
2 replies
Well, I discovered that you shouldn't do that, because it corrupts the sampler's history.
Pedro Vítor
I'm running a study with AllenNLP-Optuna testing 9 hyperparameters: 8 categoricals and a float from 1 to 5 (with step 1). This yields around 576k different combinations. I'm around trial 26 now, and for some reason trials 21 to 25 all used exactly the same hyperparameters, which were "copied" from trial 11, the best one. Why is this happening? Why isn't Optuna trying other combinations? I'm using the default TPESampler (with default parameters) and SuccessiveHalvingPruner with min_resource=5 (other parameters also default). I'm also running two processes, one per GPU, both sharing the same Optuna database.
5 replies
Pedro Vítor
If I'm using a particular metric for a study, how do I set up the study so that trials get pruned unless they reach a minimum score of x after y epochs?
6 replies
TKouras
Hello, I'm trying to do some regressions with Optuna, and after the 50 trials I want to keep the best r2_score and a scatter plot of actual vs. predicted values. If I do this inside the objective, it produces 50 scatter plots, but I only want the best one. Is there any way to do this?
4 replies
Alexander_Konstantinidis
@AlexanderKonstantinidis

Hi, I am running the following code and I get an error message; could you please help?

```python
import optuna.integration.lightgbm as lgbm

best_params, tuning_history = dict(), list()
booster = lgbm.train(params, dtrain, valid_sets=dval,
                     verbose_eval=0,
                     best_params=best_params,
                     tuning_history=tuning_history)
```

The error message is:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-28-c0d324367a3c> in <module>
      4                      verbose_eval=0,
      5                      best_params=best_params,
----> 6                      tuning_history=tuning_history)
      7

~\Anaconda3\envs\tf2\lib\site-packages\optuna\integration\_lightgbm_tuner\__init__.py in train(*args, **kwargs)
     32     _imports.check()
     33
---> 34     auto_booster = LightGBMTuner(*args, **kwargs)
     35     auto_booster.run()
     36     return auto_booster.get_best_booster()

TypeError: __init__() got an unexpected keyword argument 'best_params'
```
Thank you.

Hiroyuki Vincent Yamazaki
@hvy

As always, thanks for all the feedback and contributions. We’ve just released v2.8.0 with several interesting features and improvements.

🙊Constant Liar (CL) for TPE improves distributed search
🌳Tree-structured search space support for multivariate TPE
🪞Copying Studies across storages
📞Callbacks to re-run a pre-empted trial

Check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.8.0 or with the Tweet https://twitter.com/OptunaAutoML/status/1401799603154939908.

Patshin_Anton
@paantya

Hi All!

Can you please tell me whether it is possible to use optuna.pruners together with n_jobs=10, and how best to do it to get a speedup?

5 replies
Miguel Crispim Romao
@romanovzky
Hi all. I have an objective function with a lot of invalid regions, which I report to the study by returning a NaN. I notice that the TPE sampler spends a lot of time/trials around invalid regions after a while. Is this by construction, due to "uncertainty-driven" GP-style logic? Is there a way to prevent the TPE from spending so long trying points with high uncertainty near previous NaNs? Cheers
5 replies
Hello,
I have a question about hyperparameter importance. When I use optuna.importance.get_param_importances(study) and optuna.visualization.plot_param_importances(study), both on the same single-objective study and with default parameters, the results differ. Why is that? Or maybe I am doing something wrong? I tried to find an explanation in the Optuna docs but haven't found anything.
Krishna Bhogaonker
@00krishna
Hello there, Optuners. This is my first time using Optuna, and it works well except for one small thing that I'm not sure how to explain. I use the Weights and Biases site (wandb.com) to track all of my model runs and logs, and I use pytorch-lightning to run my model. I can successfully create the objective() function and run the training, but the log on Weights and Biases is getting overwritten. So instead of different runs or experiments for each trial, I just see one really funky run with training errors and losses everywhere, because everything is mixed together.
Krishna Bhogaonker
@00krishna
Here is the code, if it helps. I am not sure whether there is a way to tell Optuna to start a new experiment each time a trial runs.
```python
def objective(trial: optuna.trial.Trial) -> float:
    input_window_size = trial.suggest_categorical("input_window_size", [20, 30, 40])
    output_window_size = trial.suggest_categorical("output_window_size", [20, 30, 40])
    test_percentage = 0.20
    val_percentage = 0.20
    lags = [1, 2, 3, 365, 366, 367]
    lag_combos = list(powerset(lags))
    laglist = trial.suggest_categorical("lags", lag_combos)
    drop_columns = ['pr', 'tmmx']
    drop_column_combos = list(powerset(drop_columns))
    remove_columns = trial.suggest_categorical("remove_columns", drop_column_combos)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64, 128])

    datamodule = get_dataset('Sensor1')
    datamodule = datamodule(input_window_size=input_window_size,
                            output_window_size=output_window_size,
                            test_percentage=test_percentage,
                            val_percentage=val_percentage,
                            laglist=laglist,
                            remove_columns=remove_columns,
                            batch_size=batch_size)

    datamodule.prepare_data()
    datamodule.setup()

    hidden_dim = trial.suggest_categorical("hidden_dim", [16, 32, 64, 128])
    num_layers = trial.suggest_int("num_layers", 1, 3)
    dropout = trial.suggest_float("dropout", 0.2, 0.5, step=0.1)

    optimizer_name = trial.suggest_categorical("optimizer", ["Adam", "RMSprop", "SGD"])
    lr = trial.suggest_uniform("lr", 1e-5, 1e-1)

    model = LitDivSensor(num_features=datamodule.num_features,
                         hidden_dim=hidden_dim,
                         num_layers=num_layers,
                         dropout=dropout,
                         debug=False,
                         learning_rate=lr,
                         batch_size=batch_size,
                         optimizer_name=optimizer_name)

    tb_logger = pl_loggers.TensorBoardLogger('logs/', name='division-bell-rnn')
    wandb_logger = pl_loggers.WandbLogger(name='division-bell-rnn' + str(date_time),
                                          save_dir='logs/',
                                          project='division-bell',
                                          entity='****',
                                          offline=False)
    trainer = pl.Trainer(
        logger=[tb_logger, wandb_logger],
        checkpoint_callback=False,
        max_epochs=EPOCHS,
        gpus=1 if torch.cuda.is_available() else None,
        callbacks=[PyTorchLightningPruningCallback(trial, monitor="val_loss")])

    hyperparameters = dict(input_window_size=input_window_size,
                           output_window_size=output_window_size,
                           laglist=laglist,
                           remove_columns=remove_columns,
                           batch_size=batch_size,
                           hidden_dim=hidden_dim,
                           num_layers=num_layers,
                           dropout=dropout,
                           optimizer_name=optimizer_name,
                           learning_rate=lr)

    trainer.logger.log_hyperparams(hyperparameters)
    trainer.fit(model, datamodule=datamodule)

    return trainer.callback_metrics["val_loss"].item()
```
Philipp Dörfler
@phdoerfler
'sup! What RDBs does Optuna support? The docs mention SQLite, MySQL and PostgreSQL, but I can't find a complete list. I assume it uses some other library for DB access, which would have said list, right? Which one? Thanks!
Alternatively: what exactly is the issue with SQLite and running Optuna in parallel? I'm planning on running at most 4 instances of Optuna in parallel (the machine has 4 GPUs). Is accessing SQLite simultaneously merely less efficient, or will it introduce inconsistencies? I'd be completely OK with, e.g., locking happening; that's still plenty fast for my use case. I'm also on a system that isn't maintained by me, so I can't easily install PostgreSQL or some other RDB.
For that reason I was also looking into portable versions of PostgreSQL that would be self-contained in a single directory, but I only found something for Windows (and this is an Ubuntu machine I'm talking about).
Philipp Dörfler
@phdoerfler
fwiw I just realised that in https://optuna.readthedocs.io/en/stable/faq.html#how-can-i-use-two-gpus-for-evaluating-two-trials-simultaneously the code suggests that it is OK to use sqlite from multiple instances (on the same machine). Is this viable or just an oversimplification for the sake of easy to read documentation?
Renato Hermoza Aragonés
@renato145
I'm using Postgres while training on Kubernetes and a Slurm cluster.
I have one question: some of my clusters have a limited execution time. How do I resume a trial?
2 replies
Shalini
@tomshalini
Hello everyone,
I want to plot the history of trials using "plot_optimization_history", but it does not work with multiple objectives. Is there something I can try to visualize multi-objective sampler results?
2 replies
Evgeny Frolov
@evfro

Hi everyone. I have a three-level question about the best practices for skipping trials.

1. Is it possible to prohibit sampling of certain combinations of values from hyper-parameter distributions? I have a complex search space and there's an interplay between sampled hyper-parameter values. As an example, after params a, b, c are sampled the trial is allowed to run only if, e.g., a * b > c.

2. What is the most appropriate way to skip a trial before performing main calculations in an objective (e.g., defining model, running training, etc.)?
Currently, once I have made all the necessary trial.suggest_* calls, I also call my custom is_valid(trial.params) function, which tells me whether the trial is allowed to run. The naïve implementation is to just call is_valid within the objective body, but I would like to decouple the objective from the verification of trial parameters. I compose many models with different objectives and different restrictions on the search space, so I'd like to avoid adding an is_valid(trial.params) call to every objective and would prefer a cleaner solution: something like a pre-trial check that I define once and register for any objective, whatever it is. Does Optuna have such functionality? Or do I have to go with closures/decorators/custom objective classes?

3. Once a way to skip a trial is found, what is the correct approach for actually skipping it in the study? I'm specifically interested in TPESampler, which uses the history of trials for its next steps. There seem to be at least two ways: either raise optuna.TrialPruned() if the trial should be skipped, or raise a custom exception like ConfigError and add it to the catch list of the study.optimize call. Which one is more appropriate in terms of ensuring the correct behavior of TPESampler?

Thanks!

8 replies
Khawaja Umair Ul Hassan
@umairkhawaja
I modified the PyTorch simple code example given in this repo and replaced the training/validation loop with code similar to the sample training loop in a classifier-training example in the PyTorch docs. However, the study ends after a single trial. I'm assuming this has something to do with the return accuracy statement at the end of the objective function. Can someone explain the purpose of this statement? Or better, does anyone have an idea what I could be doing wrong?
ik9999
@ik9999
Hello. I use a remote PostgreSQL database as storage, and it is really slow because I add many user attributes one by one. Is there a way to insert them all at once?
7 replies
ik9999
@ik9999
Another question: is there a sampler which goes through all the sets of parameters starting with the most diverse ones (i.e., most different from each other)?
Hiroyuki Vincent Yamazaki
@hvy
We just started a user survey about Optuna: how we're all using it and how it can be improved. Any feedback is really appreciated. We'll try to incorporate it into the coming releases, long and short term. 🙇‍♂️ https://docs.google.com/forms/d/e/1FAIpQLScv7Lz7ckbDxdFYHnnXiBDUfPj-cOp9csNbYLuXg31GTj1cTA/viewform
A.G.
@Divide-By-0
Hey, is there a way for Optuna to print the found sensitivities of the hyperparameters, so I know what is most important to change in the future?
3 replies
A.G.
@Divide-By-0
How does PyTorchLightningPruningCallback utilize all the graphs of past train losses (or whatever the monitor variable is set to)? If I set the optuna objective() function to return a different parameter (i.e. validation error), will the monitor argument get confused, or still compare current train loss against past train losses?
Paulo Lacerda
@placerda
Hello, is it possible to re-run a trial that failed because I stopped program execution? When I run optimize it does not try to execute that trial again.
3 replies
Miguel Crispim Romao
@romanovzky
Hi all. I was wondering whether there is a quick way to interrupt a study once I have a number of trials that satisfy a condition. For example, I might set a certain boolean user_attr inside the objective function; after n trials fulfil that condition, I want to stop. What would be the recommended way of going about this? I was thinking of accessing the trial.study attribute, since it is stateful over the whole optimisation (is this true?), and then invoking the trial.study.stop() method after parsing trial.study.trials to count my user_attr values. Would this be a good idea?
A.G.
@Divide-By-0
I accidentally ran a study with objective = maximize instead of objective = minimize. Can I switch it now and load from the same DB, so it can reuse the found sensitivities? When I try, I get ValueError: Cannot overwrite study direction from [<StudyDirection.MAXIMIZE: 2>] to [<StudyDirection.MINIMIZE: 1>].
I.e., can I just overwrite the study_directions table manually to have minimize?
Philip May
@PhilipMay
I would suggest creating a new study with the right direction and then enqueueing all trials from your wrong study into the new one.
A.G.
@Divide-By-0
Cool! Will try
SurajitTest
@SurajitTest
Hi, I am using xgboost 1.4.0 and Optuna 2.8.0 on a RHEL 7.1 machine that has 64 processors. I am trying to use 60 of the 64 processors for both xgboost and Optuna (study.optimize). The code has run for 8 hours with no output. Can someone please help?
```python
# Import data into xgb.DMatrix form
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# Set parameters
n_splits = 5
random_state = 42
tree_method = 'auto'
predictor = 'cpu_predictor'
eval_metric = 'aucpr'
grow_policy = 'depthwise'
booster = 'gbtree'
base_score = 0.5
objective_ml = 'binary:logistic'
verbosity = 1
early_stopping_rounds = 100
num_boost_round = 50000
parallel_jobs = 60

# Instantiate the stratified folds
skfolds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=random_state)

# Define the search space and the objective function
def objective(trial):
    # Define the search space
    param_sp = {
        'base_score':       base_score,
        'booster':          booster,
        'colsample_bytree': trial.suggest_discrete_uniform('colsample_bytree', 0.7, 0.85, 0.05),
        'learning_rate':    trial.suggest_loguniform('learning_rate', 0.01, 0.1),
        'max_depth':        trial.suggest_int('max_depth', 4, 8, 1),
        'objective':        objective_ml,
        'scale_pos_weight': trial.suggest_uniform('scale_pos_weight', 1, 100),
        'subsample':        trial.suggest_discrete_uniform('subsample', 0.5, 0.85, 0.05),
        'verbosity':        verbosity,
        'tree_method':      tree_method,
        'predictor':        predictor,
        'eval_metric':      eval_metric,
        'grow_policy':      grow_policy,
    }

    # Perform native-API cross validation
    xgb_cv_results = xgb.cv(param_sp, dtrain, stratified=True, folds=skfolds, metrics=eval_metric,
                            num_boost_round=num_boost_round, early_stopping_rounds=early_stopping_rounds,
                            as_pandas=True, verbose_eval=False, seed=random_state, shuffle=True)

    # Set n_estimators as a trial attribute
    trial.set_user_attr("n_estimators", len(xgb_cv_results))
    # Obtain the number of estimators
    n_estimators = len(xgb_cv_results)

    # Create the params set for obtaining the cross-validation score
    params_cv = {
        'learning_rate':    trial.params['learning_rate'],
        'subsample':        trial.params['subsample'],
        'colsample_bytree': trial.params['colsample_bytree'],
        'max_depth':        trial.params['max_depth'],
        'scale_pos_weight': trial.params['scale_pos_weight'],
        'base_score':       base_score,
        'booster':          booster,
        'objective':        objective_ml,
        'verbosity':        verbosity,
        'tree_method':      tree_method,
        'predictor':        predictor,
        'eval_metric':      eval_metric,
        'grow_policy':      grow_policy,
        # sklearn-API-specific variables
        'random_state':     random_state,
        'n_jobs':           parallel_jobs,
        'n_estimators':     n_estimators,
        'use_label_encoder': False,
    }

    # Instantiate the XGB estimator
    xgb_estimator = XGBClassifier(**params_cv)
    # Obtain the cross-validation score - used by the trial to rate models
    cv_score = cross_val_score(xgb_estimator, X_train, y_train, scoring='f1',
                               cv=skfolds, n_jobs=parallel_jobs).mean()
    return cv_score

# Create the study
study = optuna.create_study(study_name='XGB', direction='maximize',
                            sampler=TPESampler(consider_magic_clip=True,
                                               seed=random_state,
                                               multivariate=True))
# Perform the search
n_trials = 360
study.optimize(objective, n_trials=n_trials, n_jobs=parallel_jobs)
4 replies
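A likely culprit, sketched as arithmetic: the parallelism here is nested (study.optimize n_jobs × cross_val_score n_jobs × xgboost's own threads), so 60 × 60 workers on 64 cores oversubscribes the machine massively, and the resulting thrashing can look exactly like a silent hang. Keeping the product of the levels near the core count is the usual fix; the split below is only illustrative:

```python
import multiprocessing

CORES = multiprocessing.cpu_count()   # 64 on the machine described above

# Nested parallelism multiplies across levels; with n_jobs=60 for both
# study.optimize and cross_val_score (plus xgboost's internal threads),
# thousands of workers end up fighting over 64 cores.
outer_jobs = 4                            # study.optimize(..., n_jobs=outer_jobs)
inner_jobs = max(1, CORES // outer_jobs)  # cross_val_score / xgboost n_jobs
print(outer_jobs, inner_jobs, outer_jobs * inner_jobs)
```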
A.G.
@Divide-By-0
Is there a way to have Optuna print the hyperparameter importances with Plotly on a specific localhost port? Right now when I do fig = optuna.visualization.plot_param_importances(study); fig.show(), it chooses localhost:<randomport>.
Or at least print the port it serves on to the console, so I can forward it over SSH.
A.G.
@Divide-By-0
I managed to have fig.show output an SVG, so that's resolved.
Is there a way to set Optuna default hyperparameters (i.e. optuna/optuna#1855), so that at the start of an empty study it runs a trial with hyperparameters which I know are pretty good, and then searches knowing there's a pretty decent point there? Would that even help the algorithm?