I think that depends on your hyperparameter. If it is discrete, `suggest_discrete_uniform` is usually suitable for it.
Of course, you can manually discretize the return value of `suggest_float`, but I don't think that is a good approach, since the observation (i.e., the history of hyperparameters) differs from the values that are actually used.
I lean toward `suggest_discrete_uniform`, largely because it's more interpretable: with `suggest_float`, if I get `x=0.98` after optimization, is it actually just converging to `1.0`? And if I instead used `suggest_categorical` over `[0.0, 0.1, 0.2, ...]`, it would perform substantially worse, right?
`suggest_categorical` is likely to show worse performance than `suggest_discrete_uniform` because the samplers cannot see the distance between values.
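To make the comparison concrete, here is a minimal sketch (the objective and search range are made up for illustration) contrasting a discrete numerical parameter with a categorical one:

```python
import optuna


def objective(trial):
    # Discrete numerical parameter: the candidates 0.0, 0.1, ..., 1.0 are
    # ordered and evenly spaced, so the sampler can exploit distances.
    x = trial.suggest_discrete_uniform("x", 0.0, 1.0, 0.1)

    # Categorical alternative with the same candidates: they are treated as
    # unordered choices, so the sampler cannot see that 0.2 is close to 0.3.
    # x = trial.suggest_categorical("x", [i / 10 for i in range(11)])

    return (x - 0.4) ** 2  # toy objective for illustration


study = optuna.create_study()
study.optimize(objective, n_trials=30)
print(study.best_params)
```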
(psycopg2.errors.StringDataRightTruncation) value too long for type character varying(2048) [SQL: INSERT INTO trial_system_attributes (trial_id, key, value_json) VALUES (%(trial_id)s, %(key)s, %(value_json)s) RETURNING trial_system_attributes.trial_system_attribute_id]
```python
# Serialize the CMA-ES optimizer state and store it as a trial system attribute.
optimizer_str = pickle.dumps(optimizer).hex()
study._storage.set_trial_system_attr(trial._trial_id, "cma:optimizer", optimizer_str)
```
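For illustration only, a rough sketch (using a made-up stand-in for the real CMA-ES optimizer object) of why the hex-encoded pickle easily exceeds the 2048-character limit of the `value_json` column:

```python
import pickle

# Hypothetical stand-in for the real CMA-ES optimizer: the actual state
# includes a mean vector, a covariance matrix, and more, so its pickle is large.
fake_optimizer_state = {
    "mean": [0.0] * 50,
    "covariance": [[0.0] * 50 for _ in range(50)],
}

# Hex encoding doubles the byte length of the pickle, so the stored string
# ends up far above 2048 characters.
optimizer_str = pickle.dumps(fake_optimizer_state).hex()
print(len(optimizer_str))  # tens of thousands of characters for this toy state
```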
Hi there, I have an issue with `optuna.integration.lightgbm`.
When I run the code below, I get a TypeError.
I looked for a similar issue, but I couldn't find one.
```python
from optuna.integration import lightgbm as lgb

models = []
best_params_list = []
best_params, tuning_history = dict(), list()

for train_index, valid_index in skf.split(X_train_valid.values, y_train_valid.values):
    lgb_train = lgb.Dataset(X_train_valid.iloc[train_index, ], y_train_valid.iloc[train_index, ])
    lgb_valid = lgb.Dataset(
        X_train_valid.iloc[valid_index, ],
        y_train_valid.iloc[valid_index, ],
        reference=lgb_train,
    )
    model_ = lgb.train(
        lgbm_params,
        lgb_train,
        valid_sets=lgb_valid,
        num_boost_round=1000,
        early_stopping_rounds=100,
        verbose_eval=10,
        best_params=best_params,
        tuning_history=tuning_history,
        random_state=RANDOM_STATE,
    )
    models.append(model_)
    best_params_list.append(best_params)
    tuning_history.append(tuning_history)
```
```
TypeError                                 Traceback (most recent call last)
<ipython-input-27-d60ed207f14a> in <module>()
---> 18     random_state=RANDOM_STATE
/opt/conda/lib/python3.6/site-packages/optuna-1.5.0-py3.6.egg/optuna/integration/_lightgbm_tuner/__init__.py in train(*args, **kwargs)
---> 36     auto_booster = LightGBMTuner(*args, **kwargs)
     38     return auto_booster.get_best_booster()
TypeError: __init__() got an unexpected keyword argument 'best_params'
```
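A minimal sketch of a possible fix, assuming the `best_params` and `tuning_history` keyword arguments were removed from the tuner in this version and that the tuned parameters can instead be read from the returned booster's `params` attribute (an assumption worth double-checking against the integration docs):

```python
model_ = lgb.train(
    lgbm_params,
    lgb_train,
    valid_sets=lgb_valid,
    num_boost_round=1000,
    early_stopping_rounds=100,
    verbose_eval=10,
)
models.append(model_)
# Read the tuned parameters from the booster instead of passing best_params in.
best_params_list.append(model_.params)
```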
Please verify your branch against the latest `master` to check for violations (`isort . --check --diff`). I temporarily configured the repository to require that PRs are verified against the latest commit in `master`, so they won't get merged otherwise. Sorry for the additional work... optuna/optuna#1695
An update on the above: I had required that PRs be up to date with `master` at all times. This was because isort was introduced and `master` could break quite easily in terms of style violations. However, we noticed it puts a lot of burden on the PR author, so I disabled this enforcement again. It's now back to normal.
```python
study = optuna.create_study(direction='maximize')
objective = MyObjective(x_train, x_val, y_train, y_val)
study.optimize(objective, n_trials=n_trials, n_jobs=n_jobs)
```
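For context, `MyObjective` here would be a callable class along these lines (a minimal sketch; the constructor signature matches the snippet above, but the model and hyperparameter inside are illustrative assumptions):

```python
import optuna
from sklearn.linear_model import LogisticRegression


class MyObjective:
    def __init__(self, x_train, x_val, y_train, y_val):
        # Keep the data splits so the study can call the objective repeatedly.
        self.x_train, self.x_val = x_train, x_val
        self.y_train, self.y_val = y_train, y_val

    def __call__(self, trial):
        # Sample a hyperparameter, fit on the training split, and return a
        # validation score, which the study maximizes.
        c = trial.suggest_float("C", 1e-3, 1e3, log=True)
        model = LogisticRegression(C=c, max_iter=1000)
        model.fit(self.x_train, self.y_train)
        return model.score(self.x_val, self.y_val)
```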
Thanks for all your contributions! We've just released v2.2.0. Some of the notable PRs include the following.
A new experimental option allows `TPESampler` to do relative sampling. Please try it out, as it's showing very promising results on early benchmarks (see the sketch below).
`AllenNLPExecutor` now supports pruning to speed up your NLP experiments.
Also, note that starting from this version, Python 3.5 is no longer officially supported.
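A minimal sketch of trying out the experimental relative sampling, assuming it is exposed through the `multivariate=True` flag of `TPESampler` (the exact option name is an assumption here):

```python
import optuna
from optuna.samplers import TPESampler


def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    y = trial.suggest_float("y", -10.0, 10.0)
    return (x - 2.0) ** 2 + (y + 3.0) ** 2


# multivariate=True is assumed to enable the experimental relative (joint)
# sampling mentioned above; the default samples each parameter independently.
study = optuna.create_study(sampler=TPESampler(multivariate=True))
study.optimize(objective, n_trials=50)
print(study.best_params)
```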