    ina235
    @ina235
    :)
    Hi @sile, how does Optuna pick the optimizer name and lr for every trial?
    I used the above code with the number of trials set to 5, and I got the results below:

    Finished trial#0 resulted in value: 0.45259553097334064. Current best value is 0.45259553097334064 with parameters: {'adam_lr': 0.005634984338480283, 'optimizer': 'Adam'}.

    Finished trial#1 resulted in value: 0.1961270316900221. Current best value is 0.45259553097334064 with parameters: {'adam_lr': 0.005634984338480283, 'optimizer': 'Adam'}.

    Finished trial#2 resulted in value: 0.3304895917309103. Current best value is 0.45259553097334064 with parameters: {'adam_lr': 0.005634984338480283, 'optimizer': 'Adam'}.

    Finished trial#3 resulted in value: 0.33403963344607406. Current best value is 0.45259553097334064 with parameters: {'adam_lr': 0.005634984338480283, 'optimizer': 'Adam'}.

    For the first 4 trials the optimizer was always Adam and the lr was exactly the same. I actually expected it to choose a different lr for each trial and to pick between Adam and SGD for each trial.
    ina235
    @ina235
    do you have any idea?
    Masaki Kozuki
    @crcrpar
    Hi,
    The log prints the value of the latest trial together with the best of all values reported so far. So in your case, trial 0 was the best of the first four trials, which is why its parameters keep appearing.
    Takeru Ohta
    @sile
    Thanks @crcrpar!
    @ina235 As he said, Optuna shows only the parameters of the best trial in the log messages. If you want the suggested parameters of the other trials, you can use study.trials or the study.trials_dataframe method, which return information about every trial run during an optimization.
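    For example (a small sketch; study is the object returned by optuna.create_study()):

    # Print the number, objective value, and suggested parameters of every trial.
    for trial in study.trials:
        print(trial.number, trial.value, trial.params)

    # Or get all trials as a pandas DataFrame (pandas must be installed).
    print(study.trials_dataframe())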
    ina235
    @ina235
    Oh okay. I get it. Thanks a lot @crcrpar and @sile
    Sorry for the late response guys!!
    ina235
    @ina235
    Hello, I am trying to reduce the training loss before running the model on validation data. That is, my train(trial, model, criterion, train_loader) function (called via study.optimize with n_trials=10) returns the training loss, and I am trying to minimize that.
    My question is: will the model with the best training loss be used on the validation data, or will the model with the weights, optimizer, and lr from the 10th trial be the one used on the validation data? I am confused as I am not sure what happens internally. If someone can enlighten me on this, it would be great.
        # Snippet from my script; count, patience, optimal_loss, model, criterion
        # and the data loaders are defined earlier.
        for epoch in range(n_epochs):
            if count >= patience:
                break
            study = optuna.create_study(direction='minimize')
            study.optimize(lambda trial: train(trial, model, criterion, train_loader), n_trials=10)
            print('Best trial:')
            trial = study.best_trial
            print('  Value: ', trial.value)
            valid_loss, valid_accuracy = test(model, val_loader, criterion)
            print("epoch:", epoch, "validation loss:", valid_loss, "validation accuracy:", valid_accuracy)
            if valid_loss < optimal_loss:
                optimal_loss = valid_loss
                count = 0
                print("saving the model")
                torch.save(model.state_dict(), 'optimal_model')
            else:
                count += 1

        # After early stopping, evaluate the saved model on the test set.
        test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
        model.load_state_dict(torch.load('optimal_model'))
        loss, acc, roc_auc, pr_auc = test(model, test_loader, criterion)
    ina235
    @ina235
    In short, how can I save the model from the best trial (the best loss value, in my case)?
    Takeru Ohta
    @sile
    Hmm, typically Optuna is used to optimize the whole training process (see https://github.com/optuna/optuna/blob/master/examples/pytorch_simple.py#L81 for a PyTorch example). Invoking study.optimize() within each epoch seems a bit strange to me.
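    For reference, a rough sketch of that pattern (hedged; build_model, train_one_epoch, test, n_epochs, and the loaders are placeholders for your own code, not Optuna APIs):

    import optuna
    import torch

    def objective(trial):
        # Hyperparameters are suggested once per trial.
        optimizer_name = trial.suggest_categorical('optimizer', ['Adam', 'SGD'])
        lr = trial.suggest_loguniform('lr', 1e-5, 1e-1)

        model = build_model()  # placeholder: returns a fresh, untrained model
        optimizer = getattr(torch.optim, optimizer_name)(model.parameters(), lr=lr)

        # The whole training loop runs inside the trial ...
        for epoch in range(n_epochs):
            train_one_epoch(model, optimizer, criterion, train_loader)  # placeholder

        # ... and the trial returns a single score for Optuna to maximize.
        _, accuracy = test(model, val_loader, criterion)  # placeholder
        return accuracy

    study = optuna.create_study(direction='maximize')
    study.optimize(objective, n_trials=10)
    print(study.best_trial.params)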
    ina235
    @ina235
    Okay, the example is clear, and it is similar to what I need. So while maximizing the accuracy, the loss is taken care of and the optimizer is chosen so as to maximize the accuracy.
    Kindly correct me if my understanding is wrong.
    Takeru Ohta
    @sile
    I think your understanding is correct.
    ina235
    @ina235
    Ok. Thank you :)
    Harutaka Kawamura
    @harupy

    The PR to add a contour plot on the MLflow UI is merged.

    mlflow/mlflow#2225

    Masashi SHIBATA
    @c-bata
    :tada:
    Takeru Ohta
    @sile
    :confetti_ball:
    Harutaka Kawamura
    @harupy
    Thank you, the Optuna team!
    ina235
    @ina235
    Hi, is there a minimum number of trials I typically need to run to see the best results, or does it completely depend on the problem?
    Hiroyuki Vincent Yamazaki
    @hvy
    You are correct in that it depends on the problem, i.e. what objective you're trying to optimize and how. It might be worth noting that certain algorithms, such as the TPE sampler (the default sampler), behave like a random sampler for the first few trials, as seen in https://github.com/optuna/optuna/blob/master/optuna/samplers/tpe/sampler.py#L131-L133. You can specify the number of these random trials, but if you are running on the order of 10^2 trials or more, you shouldn't have to worry.
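    For example, the number of those initial random trials can be set explicitly on the sampler (n_startup_trials is a real TPESampler argument; 5 is just an illustrative value):

    import optuna

    # The first 5 trials are sampled at random before TPE starts modeling.
    sampler = optuna.samplers.TPESampler(n_startup_trials=5)
    study = optuna.create_study(sampler=sampler)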
    ina235
    @ina235
    Oh okay. Thanks a lot :) I get your point
    Thomas PEDOT
    @slamer59
    Happy New Year everyone!
    Does Optuna include a time limit per trial?
    We can set either n_trials or timeout, but is there a way to force a failure if a single trial exceeds a certain time limit? Some of my trials take a huge amount of time, and I consider them non-optimal.

    If not, I can use something like this:

    import signal

    def signal_handler(signum, frame):
        raise Exception("Timed out!")

    signal.signal(signal.SIGALRM, signal_handler)
    signal.alarm(10)   # ten seconds
    try:
        long_function_call()
    except Exception as msg:
        print("Timed out!")

    and then raise some Optuna failure exception.

    Masaki Kozuki
    @crcrpar
    @slamer59 Happy New Year! As far as I know, Optuna does not support such a feature. 😢
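    That said, one way to approximate it yourself is to combine @slamer59's signal idea with the catch argument of study.optimize(), so a timed-out trial is recorded as failed instead of stopping the whole study (a hedged sketch; long_function_call is a placeholder, and SIGALRM works on Unix in the main thread only):

    import signal
    import optuna

    class TrialTimeout(Exception):
        pass

    def signal_handler(signum, frame):
        raise TrialTimeout('Timed out!')

    def objective(trial):
        # Arm a per-trial alarm; the handler raises if the trial runs too long.
        signal.signal(signal.SIGALRM, signal_handler)
        signal.alarm(10)  # ten-second limit, adjust as needed
        try:
            return long_function_call()  # placeholder for the real work
        finally:
            signal.alarm(0)  # disarm the alarm if the trial finished in time

    study = optuna.create_study()
    # catch makes Optuna record TrialTimeout as a failed trial and continue.
    study.optimize(objective, n_trials=20, catch=(TrialTimeout,))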
    Takeru Ohta
    @sile
    Hi, everyone.
    Today, we released Optuna 1.0, the first stable version!
    In addition, we have started a developer blog. The first post covers this release and the future roadmap.
    Shuhei Fujiwara
    @sfujiwara
    :tada:
    Takuya Akiba
    @iwiwi
    :tada:
    Hiroyuki Vincent Yamazaki
    @hvy
    Probably related to the release of v1.0, but it's nice to see PRs from first-time contributors. :slight_smile:
    optuna/optuna#838
    optuna/optuna#839
    optuna/optuna#840
    Takeru Ohta
    @sile
    :+1:
    Adesh Bansode
    @adeshbansode
    [attachments: error.PNG, error1.PNG]
    How do I handle a conditional hyperparameter? It gives the above error when the model is fit.
    Masaki Kozuki
    @crcrpar

    @adeshbansode I think it might be lr.set_params instead of lr.get_params. http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html

    Also, it seems some solvers have constraints on the penalty, so the list of solvers needs to be defined according to the type of penalty chosen, roughly as in the sketch below.
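    Roughly like this (a hedged sketch, not the gist itself; the solver lists follow scikit-learn's documented penalty/solver compatibility, and distinct parameter names keep each categorical's choice set fixed):

    import optuna
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def objective(trial):
        # Suggest the penalty first, then restrict solvers to ones that support it.
        penalty = trial.suggest_categorical('penalty', ['l1', 'l2'])
        if penalty == 'l1':
            solver = trial.suggest_categorical('solver_l1', ['liblinear', 'saga'])
        else:
            solver = trial.suggest_categorical('solver_l2', ['liblinear', 'saga', 'lbfgs'])
        clf = LogisticRegression(penalty=penalty, solver=solver, max_iter=1000)
        X, y = load_iris(return_X_y=True)
        return cross_val_score(clf, X, y, cv=3).mean()

    study = optuna.create_study(direction='maximize')
    study.optimize(objective, n_trials=20)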

    Adesh Bansode
    @adeshbansode
    @crcrpar Ah, lr.set_params gives an error too. I will check the constraints on penalty. Thanks!
    Masaki Kozuki
    @crcrpar
    @adeshbansode If your problem is not resolved yet, here’s a snippet that implements what I think you want to do, thanks! https://gist.github.com/crcrpar/d025b837720e08131ed02e7511188bde
    Adesh Bansode
    @adeshbansode
    @crcrpar Thanks a lot..!!
    Masaki Kozuki
    @crcrpar
    My pleasure!
    Adesh Bansode
    @adeshbansode
    @crcrpar Can I add all classifiers with their hyperparameters to build an efficient model that gives better results? Is it possible to optimize the choice of classifier using Optuna?
    Masaki Kozuki
    @crcrpar
    @adeshbansode Of course! https://gist.github.com/crcrpar/d025b837720e08131ed02e7511188bde adds the classifier as a hyperparameter. I'm sure that SVC has more hyperparameters, but this is a casual example. :)
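    In rough form, the gist's idea looks like this (hedged; the ranges are illustrative only):

    import optuna
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def objective(trial):
        # The classifier itself is a categorical hyperparameter; each branch
        # then suggests that classifier's own hyperparameters.
        name = trial.suggest_categorical('classifier', ['SVC', 'LogisticRegression'])
        if name == 'SVC':
            c = trial.suggest_loguniform('svc_c', 1e-3, 1e3)
            clf = SVC(C=c, gamma='auto')
        else:
            c = trial.suggest_loguniform('logreg_c', 1e-3, 1e3)
            clf = LogisticRegression(C=c, max_iter=1000)
        X, y = load_iris(return_X_y=True)
        return cross_val_score(clf, X, y, cv=3).mean()

    study = optuna.create_study(direction='maximize')
    study.optimize(objective, n_trials=30)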
    Adesh Bansode
    @adeshbansode
    @crcrpar Thanks a lot! I will work on it. Actually, I am trying to build a light version of an AutoML model.
    Masaki Kozuki
    @crcrpar
    That sounds pretty cool!
    Adesh Bansode
    @adeshbansode
    @crcrpar Can you help me with how to speed up the model search using Optuna, the way hyperopt offers Apache Spark or MongoDB options?
    Chris Seymour
    @iiSeymour
    Hello All, is there a reason not to prefer ASHA over median pruning (which is the default)?
    Masaki Kozuki
    @crcrpar
    @adeshbansode I'm not sure I fully understand your question. Do you mean speeding up the optimization process? If so, Optuna has some options, such as setting a larger n_jobs in study.optimize and distributed execution; see the sketch below. Hope this helps you a bit.
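    Concretely (objective is assumed to be defined already; the sqlite URL is illustrative, and a server-based database such as MySQL or PostgreSQL is the usual choice across machines):

    import optuna

    # Option 1: run several trials in parallel threads within one process.
    study = optuna.create_study()
    study.optimize(objective, n_trials=100, n_jobs=4)

    # Option 2: distributed execution. Start this script in several processes
    # or machines; they share trials through the same storage and study name.
    study = optuna.create_study(
        study_name='my_study',
        storage='sqlite:///optuna.db',
        load_if_exists=True,
    )
    study.optimize(objective, n_trials=100)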
    Adesh Bansode
    @adeshbansode
    @crcrpar I think that will work for me. I will check it. Thanks!