    esarvestani
    @esarvestani

    Hi. Thanks for the nice package. I would like to use Optuna to find the points that minimize a function. As a simple example: suppose that we have a function of two variables, f(x,y), and I want to find the value of y that minimizes f for a given x. I implemented it like this:

    import numpy as np
    import optuna
    
    def objective(trial, x):
        y = trial.suggest_uniform('y', -1., 1.)
        return 100.*(y-x**2)**2 + (1-x)**2
    
    def optimal_y(X, n_trials):
        optuna.logging.set_verbosity(optuna.logging.WARNING)
        y_optimal = np.empty(X.shape)
        for i, x in enumerate(X):
            # One independent study per value of x.
            study = optuna.create_study()
            study.optimize(lambda trial: objective(trial, x), n_trials=n_trials)
            y_optimal[i] = study.best_params['y']
        return y_optimal
    
    if __name__ == '__main__':
        X = np.linspace(start=-1,stop=1,num=20)
        y_optimal = optimal_y(X, 100)
        print(y_optimal)

    It works fine, but this version finds the minimum for each given x serially. I would like to change the code so that the optimization runs for all values in X at once, i.e. in parallel. Is there any way to do that?
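
    A minimal sketch of one possible approach (not from the thread): each x defines an independent study, so the per-x studies can run in separate processes via the standard-library multiprocessing module; optimal_y_for_x is a hypothetical helper reusing the objective above.

    from multiprocessing import Pool

    def optimal_y_for_x(x, n_trials=100):
        # One independent study per x; the lambda closes over this worker's x.
        study = optuna.create_study()
        study.optimize(lambda trial: objective(trial, x), n_trials=n_trials)
        return study.best_params['y']

    if __name__ == '__main__':
        X = np.linspace(start=-1, stop=1, num=20)
        with Pool() as pool:  # one worker process per CPU core by default
            y_optimal = pool.map(optimal_y_for_x, X)
        print(y_optimal)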

    1 reply
    tomshalini
    @tomshalini
    Hello,
    I am training a model for 600 epochs and I want to prune trials whose accuracy is below 90% at epoch 100. How can I add that condition? Please advise.
    Currently I reach 95% accuracy after 600 epochs, but my goal is to find hyperparameter values that reach a target accuracy of 96%, so I want to prune every trial that is below 90% accuracy at epoch 100.
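
    A minimal sketch of one way to add that condition (train_one_epoch is a hypothetical helper): report the accuracy every epoch and prune manually at epoch 100.

    import optuna

    def objective(trial):
        for epoch in range(600):
            accuracy = train_one_epoch(trial)  # hypothetical training step
            trial.report(accuracy, epoch)
            # Custom condition: discard any trial below 90% accuracy at epoch 100.
            if epoch == 100 and accuracy < 0.90:
                raise optuna.TrialPruned()
        return accuracy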
    3 replies
    SM-91
    @SM-91
    Hello everyone,
    I am new to optimization and I need to implement a genetic-algorithm sampler in the Optuna framework for a single-objective study. I am unsure how to initialize the population with individuals up to the population size within a single trial. I want an initial fitness value of 0, and on the second trial I need to get the objective values from each individual in the population and sort them. Your help would be really appreciated.
    4 replies
    DentonJC
    @DentonJC
    Thanks for such a great library, I'm a huge fan. Are you planning to make CategoricalDistribution support a dynamic value space (https://github.com/optuna/optuna/issues/372)? It would be very useful to me, at least the ability to exclude categories from an existing distribution.
    2 replies
    Rounak Agarwal
    @agarwalrounak
    Hi, I am performing hyperparameter optimization/tuning using Optuna. Before performing the optimization, I have certain data consisting of hyperparameters and the corresponding value of the objective function. Can I use this data to improve the optimization performed by Optuna? Basically, I want to train the algorithm on past data. What would be a good way to go about implementing this? Is study.add_trial to be used for this purpose?
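
    A minimal sketch of that approach (the parameter and values are illustrative): past observations can be replayed with optuna.trial.create_trial and study.add_trial, so the sampler sees them as already-completed trials.

    import optuna
    from optuna.distributions import UniformDistribution

    study = optuna.create_study()
    # Replay one past observation: its params, their distributions, and the value.
    study.add_trial(
        optuna.trial.create_trial(
            params={'x': 2.0},
            distributions={'x': UniformDistribution(low=0.0, high=10.0)},
            value=4.0,
        )
    )
    study.optimize(lambda trial: trial.suggest_uniform('x', 0.0, 10.0) ** 2, n_trials=20)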
    6 replies
    SurajitTest
    @SurajitTest
    Hi, I was just going through this article: https://towardsdatascience.com/exploring-optuna-a-hyper-parameter-framework-using-logistic-regression-84bd622cd3a5. With regard to pruning with sklearn classifiers, I am confused and would appreciate help with the following.
    1) Is the for loop defined by 'for step in range(100)' set to iterate 100 times because we have set 'study.optimize(objective, n_trials=100)'?
    2) Is sklearn pruning handled correctly? In the Optuna GitHub repository I could only find https://github.com/optuna/optuna/blob/master/examples/pruning/simple.py, which uses an sklearn classifier that supports 'partial_fit()', but no example with a classifier that only supports 'fit()'.
    3) Is there any guideline for setting the number of iterations of the for loop?
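
    A sketch addressing 1) and 2) (X_train, y_train, X_valid, y_valid and classes are placeholders): the inner for loop is the number of training steps within a single trial, while n_trials is the number of trials in the study, so the two 100s are unrelated and can differ. For estimators that only expose fit() there is no per-step intermediate value to report, so step-wise pruning does not apply.

    import optuna
    from sklearn.linear_model import SGDClassifier

    def objective(trial):
        alpha = trial.suggest_float('alpha', 1e-5, 1e-1, log=True)
        clf = SGDClassifier(alpha=alpha)
        for step in range(100):  # training steps inside ONE trial
            clf.partial_fit(X_train, y_train, classes=classes)
            trial.report(clf.score(X_valid, y_valid), step)
            if trial.should_prune():
                raise optuna.TrialPruned()
        return clf.score(X_valid, y_valid)

    study = optuna.create_study(direction='maximize')
    study.optimize(objective, n_trials=50)  # trial count, independent of the loop above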
    7 replies
    SurajitTest
    @SurajitTest
    Hi, when I use optuna 2.3.0 and xgboost 1.3.1, nothing runs. I then downgraded to xgboost 1.2.1 and all was good. Is there anything significant I need to keep in mind when using xgboost 1.3.1 with optuna 2.3.0?
    5 replies
    Vinit-source
    @Vinit-source
    Hi. Could someone please show me how to use FixedTrial? I need to experiment with all possible values, like GridSearch. Can that be done in Optuna?
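
    A minimal sketch (assuming an objective(trial) function is already defined; the search space is illustrative): GridSampler exhaustively evaluates every combination, GridSearch-style, while FixedTrial runs the objective once with fixed parameter values.

    import optuna

    search_space = {'x': [-1.0, 0.0, 1.0], 'y': [1, 2, 4]}
    study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
    study.optimize(objective, n_trials=9)  # 3 x 3 = 9 combinations

    # FixedTrial evaluates the objective once with the given parameters.
    value = objective(optuna.trial.FixedTrial({'x': 0.0, 'y': 2}))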
    4 replies
    Vinit-source
    @Vinit-source
    Now, if I wish to take the values of some parameters from the GridSampler and the values of the remaining parameters from the ranges specified in the suggest functions, what can I use?
    2 replies
    Francisco Villaescusa-Navarro
    @franciscovillaescusa

    Hi. First of all, thanks a lot for developing optuna, it's amazing software! I'm doing hyperparameter optimization using a vanilla model:

    study_name = 'wd_dr_hidden_lr_e3'
    storage    = 'sqlite:///e3_%s.db'%field
    n_trials   = 50
    
    objective = Objective(device, seed, f_maps, f_params, batch_size, splits,
                          arch, min_lr, beta1, beta2, epochs, root_out, field)
    sampler = optuna.samplers.TPESampler(n_startup_trials=10)
    study = optuna.create_study(study_name=study_name, sampler=sampler, storage=storage,
                                load_if_exists=True)
    study.optimize(objective, n_trials)

    I'm using 2 GPUs (run on different terminals) and after a few trials I get this error:

    Traceback (most recent call last):
      File "main_hyperparams.py", line 222, in <module>
        study.optimize(objective, n_trials)
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/study.py", line 315, in optimize
        show_progress_bar=show_progress_bar,
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/_optimize.py", line 65, in _optimize
        progress_bar=progress_bar,
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/_optimize.py", line 156, in _optimize_sequential
        trial = _run_trial(study, func, catch)
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/_optimize.py", line 238, in _run_trial
        study._tell(trial, TrialState.COMPLETE, value)
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/study.py", line 603, in _tell
        self._storage.set_trial_state(trial._trial_id, state)
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/storages/_cached_storage.py", line 200, in set_trial_state
        return self._flush_trial(trial_id)
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/storages/_cached_storage.py", line 404, in _flush_trial
        datetime_complete=updates.datetime_complete,
      File "/mnt/home/fvillaescusa/.local/lib/python3.7/site-packages/optuna/storages/_rdb/storage.py", line 604, in _update_trial
        raise RuntimeError("Cannot change attributes of finished trial.")
    RuntimeError: Cannot change attributes of finished trial.

    I was wondering if I'm doing something wrong here. Thanks for your help!

    24 replies
    Hideaki Imamura
    @HideakiImamura

    Happy New Year! Kicking off with the release of v2.4.0, thanks to everyone who was involved. This is a minor version bump but still a big release. Please check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.4.0 or via the Tweet https://twitter.com/OptunaAutoML/status/1348897690545840135?s=20. In short, it contains:

    Python 3.9 support (with the exclusion of integration modules)
    Multi-objective optimization that’s now stable as a first-class citizen
    Sampler that wraps BoTorch for Bayesian optimization. This sampler opens up Optuna for constrained optimization using slack variables, i.e. outcome constraints such as x0 + x1 < y. See https://github.com/optuna/optuna/blob/release-v2.4.0/examples/botorch_simple.py
    Richer and more easily extensible tutorial https://optuna.readthedocs.io/en/v2.4.0/tutorial/index.html

    Hiroyuki Vincent Yamazaki
    @hvy
    🎉
    Sandeep Thapa
    @Pager07
    New to optuna. Can anyone please guide me on suggesting an int that goes in powers of 2, like [1, 2, 4, 8]? Many thanks.
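
    A sketch of two common ways (inside an objective(trial) function; the names are illustrative):

    # Suggest the exponent and derive the value:
    exponent = trial.suggest_int('exponent', 0, 3)
    n_units = 2 ** exponent  # 1, 2, 4 or 8

    # Or enumerate the powers of two directly:
    n_units = trial.suggest_categorical('n_units', [1, 2, 4, 8])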
    3 replies
    James Y
    @yuanjames
    Has anyone met the problem where a call to the plot_optimization_history() method does not show the figure?
    6 replies
    James Y
    @yuanjames
    May I ask if there is any CmaEsSampler usage example?
    James Y
    @yuanjames

    May I ask if there is any CmaEsSampler usage example?

    I am wondering whether CMA-ES is only used to sample relative parameters, while the random sampler is used for independent parameters. Is the relative search space determined by the backend if we follow the demo code below?

    import optuna
    
    
    def objective(trial):
        x = trial.suggest_uniform("x", -1, 1)
        y = trial.suggest_int("y", -1, 1)
        return x ** 2 + y
    
    
    sampler = optuna.samplers.CmaEsSampler()  # numerical params are sampled jointly by CMA-ES; categorical ones fall back to the independent sampler
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=20)
    9 replies
    Manishankar Singh
    @tonysinghmss
    Hi team,
    I am wondering how conditions and loops can be implemented in the search space. Could you please explain or point me in the right direction?
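
    A minimal sketch (hypothetical parameter names): search spaces are defined at runtime, so plain Python conditions and loops work directly inside the objective.

    import optuna

    def objective(trial):
        # Condition: a parameter that exists only on one branch.
        classifier = trial.suggest_categorical('classifier', ['SVC', 'RandomForest'])
        if classifier == 'SVC':
            svc_c = trial.suggest_float('svc_c', 1e-10, 1e10, log=True)
        else:
            rf_max_depth = trial.suggest_int('rf_max_depth', 2, 32)

        # Loop: the number of parameters depends on another parameter.
        n_layers = trial.suggest_int('n_layers', 1, 3)
        units = [trial.suggest_int(f'n_units_l{i}', 4, 128) for i in range(n_layers)]

        return evaluate(classifier, units)  # hypothetical evaluation function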
    3 replies
    SurajitTest
    @SurajitTest

    Hi team, I am using a dataset of about 140,000 rows and 300 features (after categorical encoding), with the Optuna integration for xgboost (using xgb.cv()). First I tried xgboost 1.3.3 with optuna 2.4.0: the program ran for 18 hours on 32 CPUs and not a single trial completed. I then ran xgboost 1.3.1 with optuna 2.4.0: it ran for 3 hours on 60 CPUs and again not a single trial completed. I am now trying xgboost 1.2.1 with optuna 2.3.0 on 60 CPUs. Can anyone help me understand whether there are compatibility issues?

    The code that I am using is given below:

    import optuna
    import xgboost as xgb
    from optuna.samplers import TPESampler

    # dtrain, skfolds and the ratio* values are assumed to be defined earlier.
    def objective(trial):
        # Define the search space
        param_sp = {
            'base_score'            : 0.5,
            'booster'               : 'gbtree',
            'colsample_bytree'      : trial.suggest_categorical('colsample_bytree', [0.7, 0.8, 0.9, 1.0]),
            'learning_rate'         : trial.suggest_categorical('learning_rate', [0.1]),
            'max_depth'             : trial.suggest_categorical('max_depth', [6, 8, 10]),
            'objective'             : 'binary:logistic',
            'scale_pos_weight'      : trial.suggest_categorical('scale_pos_weight', [ratio1, ratio2, ratio3, ratio4, 1, 10, 30, 50, 75, 99, 100]),
            'subsample'             : trial.suggest_categorical('subsample', [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]),
            'verbosity'             : 1,
            'tree_method'           : 'auto',
            'predictor'             : 'cpu_predictor',
            'eval_metric'           : 'aucpr'
        }

        # Add the pruning callback
        pruning_callback = optuna.integration.XGBoostPruningCallback(trial, "test-aucpr")

        # Perform native-API cross-validation
        xgb_cv_results = xgb.cv(param_sp, dtrain, stratified=True, folds=skfolds,
                                metrics='aucpr', num_boost_round=500,
                                early_stopping_rounds=50, as_pandas=True,
                                verbose_eval=False, seed=42, shuffle=True,
                                callbacks=[pruning_callback])
    
        # Set n_estimators as a trial attribute
        trial.set_user_attr("n_estimators", len(xgb_cv_results))
    
        # Extract the best score.
        best_score = xgb_cv_results["test-aucpr-mean"].values[-1]
        return best_score    
    
    pruner = optuna.pruners.MedianPruner(n_startup_trials=5, n_warmup_steps=20, interval_steps=10)
    study = optuna.create_study(study_name='XGB_Optuna_0.1_Iter1', direction='maximize',
                                sampler=TPESampler(consider_magic_clip=True, seed=42, multivariate=False),
                                pruner=pruner)
    
    # perform the search
    print('\nPerforming Bayesian Hyper Parameter Optimization..')
    study.optimize(objective, n_trials=100,n_jobs=-1)
    7 replies
    James Y
    @yuanjames
    Hi, may I ask a question about TPE? The original paper says the hyperparameter search space is structured as a tree, while Optuna's TPE uses independent GMM-based sampling for each hyperparameter. So it is not exactly the same as the original, right?
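
    A small sketch (an experimental option in recent 2.x releases): by default TPESampler samples each parameter independently; a joint variant can be enabled with the multivariate flag.

    import optuna

    sampler = optuna.samplers.TPESampler(multivariate=True)  # joint sampling, experimental
    study = optuna.create_study(sampler=sampler)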
    4 replies
    Hiroyuki Vincent Yamazaki
    @hvy
    Thanks for all your contributions, we've just released v2.5.0. This is a minor version bump but still a big release. Please check out the highlights and release notes at https://github.com/optuna/optuna/releases/tag/v2.5.0 or via the Tweet https://twitter.com/OptunaAutoML/status/1356131644466372610. In short, it contains:
    Ask-and-Tell interface to construct trials without objective function callbacks (see the sketch after this list).
    Heartbeat monitoring of trials to automatically fail stale trials.
    Constrained optimization support for NSGA-II, which is a well-known evolutionary algorithm to solve multi-objective optimization.
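
    A minimal sketch of the new Ask-and-Tell interface:

    import optuna

    study = optuna.create_study()
    trial = study.ask()                    # create a trial without a callback
    x = trial.suggest_float('x', -10.0, 10.0)
    study.tell(trial, (x - 2.0) ** 2)      # report the objective value back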
    Jyot Makadiya
    @jeromepatel
    Hello everyone,
    I am new here, a 3rd-year CS undergraduate. I am thinking about participating in GSoC 2021. I have always been a big fan of Optuna; it has helped me a lot in Kaggle competitions, thank you for creating such a great framework! When I found out that Optuna will be in GSoC 2021, I got very excited to work on and contribute to this great community. Some guidance would be helpful, such as where to start or how to begin contributing (like some good starting issues).
    Madhu Charan
    @madhucharan
    Hello everyone. My name is Madhu. I am an undergraduate student from India. I recently started contributing to open source and found Optuna. I would like someone to guide me through some beginner issues as well as some resources to get started with contributing to the codebase. Thank you.
    Crissman Loomis
    @Crissman
    @jeromepatel @madhucharan Welcome both of you. Please start with the Contribution Welcome issues! https://github.com/optuna/optuna/issues?q=is%3Aopen+is%3Aissue+label%3Acontribution-welcome
    Jyot Makadiya
    @jeromepatel
    Thank you for your quick reply, I will start with some welcome issues then!
    Madhu Charan
    @madhucharan
    Thank you @Crissman :) will start working on it
    SurajitTest
    @SurajitTest

    Hi team, I am using XGBoost 1.3.3 and Optuna 2.4.0. My dataset has 138k rows and 300 columns (after categorical encoding). I am trying to replicate the example in https://github.com/optuna/optuna/blob/master/examples/pruning/xgboost_integration.py (but only for booster='gbtree'). When I run the code, I get the message 'segmentation fault' and the program returns to the $ prompt (I am using Amazon Linux). Can anyone please help me understand why I am getting a segmentation fault?

    The code that I am using is as given below:

    import numpy as np
    import optuna
    import xgboost as xgb
    from optuna.samplers import TPESampler
    from sklearn import metrics

    # X_train, y_train, X_test, y_test and ratio1 are assumed to be defined earlier.
    # Import data into xgb.DMatrix form
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dtest = xgb.DMatrix(X_test, label=y_test)
    
    # define the search space and the objecive function
    def objective(trial):
        param_sp = {
            'base_score'            : 0.5, 
            'booster'               : 'gbtree', 
            'colsample_bylevel'     : trial.suggest_categorical('colsample_bylevel',[0.7,0.8,0.9]),        
            'colsample_bynode'      : trial.suggest_categorical('colsample_bynode',[0.7,0.8,0.9]),
            'colsample_bytree'      : trial.suggest_categorical('colsample_bytree',[0.7,0.8,0.9]),
            'gamma'                 : trial.suggest_categorical('gamma',[0.0000001,0.000001,0.00001,0.0001,0.001,0.01,0.1,0.3,0.5,0.7,0.9,1,2,3,4,5,6,7,8,9,10]),    
            'learning_rate'         : trial.suggest_categorical('learning_rate',[0.1]),
            'max_delta_step'        : trial.suggest_categorical('max_delta_step', [0,1,2,3,4,5,6,7,8,9,10]),     
            'max_depth'             : trial.suggest_categorical('max_depth', [10]),
            'min_child_weight'      : trial.suggest_categorical('min_child_weight', [1,3,5,7,9,11,13,15,17,19,21]),
            'objective'             : 'binary:logistic', 
            'reg_alpha'             : trial.suggest_categorical('reg_alpha', [0.000000001,0.00000001,0.0000001,0.000001,0.00001,0.0001,0.001,0.01,0.1,1,10,100]),
            'reg_lambda'            : trial.suggest_categorical('reg_lambda', [0.000000001,0.00000001,0.0000001,0.000001,0.00001,0.0001,0.001,0.01,0.1,1,10,100]),
            'scale_pos_weight'      : trial.suggest_categorical('scale_pos_weight', [ratio1,1,10,20,30,40,50,60,70,80,90,100,1000]),        
            'seed'                  : 42, 
            'subsample'             : trial.suggest_categorical('subsample', [0.5,0.6,0.7,0.8,0.9]),        
            'verbosity'             : 1, 
            'tree_method'           :'auto',            
            'predictor'             :'cpu_predictor', 
            'eval_metric'           :'error'
        }
    
        #Add the pruning Call Back
        pruning_callback = optuna.integration.XGBoostPruningCallback(trial, "validation-error")
    
        #Perform validation
    xgb_bst = xgb.train(param_sp, dtrain, num_boost_round=1000,
                        evals=[(dtest, "validation")], early_stopping_rounds=100,
                        verbose_eval=False, callbacks=[pruning_callback])
    
        # Set n_estimators as a trial attribute
        trial.set_user_attr("n_estimators", xgb_bst.best_ntree_limit)
    
        # Extract the best score.
        preds = xgb_bst.predict(dtest)
        pred_labels = np.rint(preds)
        f1 = metrics.f1_score(y_test, pred_labels)
        return f1
    
    pruner = optuna.pruners.MedianPruner(n_startup_trials=5, n_warmup_steps=20, interval_steps=10)
    study = optuna.create_study(study_name='XGB_Optuna_0.1_max_depth_10_Error_Val_500_trials',
                                direction='minimize',
                                sampler=TPESampler(consider_magic_clip=True, seed=42, multivariate=False),
                                pruner=pruner)
    
    # perform the search
    print('\nPerforming Bayesian Hyper Parameter Optimization..')
    study.optimize(objective, n_trials=500,n_jobs=16)
    1 reply
    Aryan Prasad
    @0x41head
    Hello, just wanted to introduce myself here. My name is Aryan and I am currently doing my undergrad in CS. I have already started contributing to Optuna to take part in GSoC '21, and I have to say this has been one of the most interesting projects I have ever been part of.
    2 replies
    FR8803
    @FR8803
    Hey guys, I'm currently trying to optimize the hyperparameters of a deep-Q reinforcement learning model implemented with TF-Agents. It is based on an OpenAI Gym environment, say "CartPole-v0". So far I haven't found any examples of an implementation with Optuna. Do you know of any code examples on GitHub, and could you share any ideas on how to approach this problem? Thanks a lot in advance!
    4 replies
    razou
    @razou

    Hello
    I'm trying to visualize the study output in a Jupyter notebook

    optuna.visualization.plot_optimization_history(study)
    optuna.visualization.plot_slice(study)
    optuna.visualization.plot_contour(study, params=['epochs', 'learning_rate'])

    Nothing happens when I run these commands.

    Has anybody done this kind of visualization in a similar environment?
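
    A minimal sketch of a likely fix (not from the thread): the optuna.visualization plot_* functions return a plotly Figure, which renders only when it is the last expression in a notebook cell or when shown explicitly.

    fig = optuna.visualization.plot_optimization_history(study)
    fig.show()  # render the figure explicitly; requires plotly to be installed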

    5 replies
    Aryan Prasad
    @0x41head
    Are the RDB storage tests from CircleCI still relevant, or have they become outdated? Asking since they seem to have been removed from the docs.
    2 replies
    Mahmoud Abdelkhalek
    @mhdadk
    Is there a rule of thumb for how to choose the number of epochs per trial?
    2 replies
    Miguel Crispim Romao
    @romanovzky
    Hi all, I have a question. I'm training a regressor with Keras, and my objective is the R2, which is positive semi-definite, and my goal is to maximise it. The R2 score is calculated at the end of training for each HP combination. However, I want to use the MedianPruner to monitor the val_loss, which is supposed to be minimised. How can I be sure that the pruner is minimising the val_loss while the optimisation step is still maximising?
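
    A minimal sketch of one common workaround (build_model and train_one_epoch are hypothetical helpers): with direction='maximize', report an intermediate value whose larger-is-better sense matches the study, e.g. the negated val_loss, so the MedianPruner compares trials in the right direction.

    def objective(trial):
        model = build_model(trial)                # hypothetical model factory
        for epoch in range(n_epochs):
            val_loss = train_one_epoch(model)     # hypothetical training step
            trial.report(-val_loss, epoch)        # negate: larger is now better
            if trial.should_prune():
                raise optuna.TrialPruned()
        return compute_r2(model)                  # final R2, maximized by the study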
    3 replies
    Francisco Villaescusa-Navarro
    @franciscovillaescusa
    Hi. When using optuna in parallel (e.g. 4 GPUs running in different terminals against the same database), how do n_trials and n_startup_trials behave? 1) Will the 4 GPUs run 50 trials in total or 200 trials in total? 2) Will the random sampling stop after n_startup_trials in total, or after each GPU has carried out n_startup_trials? Thanks!
    6 replies
    2403hwaseer
    @2403hwaseer
    Hi! I am Harman Waseer from IIT Roorkee and I am looking forward to participating in GSoC' 21. I have read about the projects and I am interested in working on the Web Dashboard project. Can someone guide me on how to get started? Thanks!
    4 replies
    viiids
    @viiids
    Hi, I have a question about customizing acquisition functions. Is that possible in Optuna? Essentially I want to keep using SingleTaskGP (or whatever model Optuna uses) along with the acquisition function, but I want to sample many points and then run another pass of a ranker that sorts them using an extra function; this final list is what I want to sample values from. To track the discussion, I have also created a ticket: optuna/optuna#2339. Feel free to reply to that.
    Jyot Makadiya
    @jeromepatel
    Hello,
    I am going to apply to Optuna for GSoC 2021. As @Crissman suggested, I have submitted my first PR, #2346; thank you @toshihikoyanase and Kento Nozawa for helping me with that. I am interested in working on the sampler projects, and related to that I am currently working on the issue optuna/optuna#2233.
    With reference to that issue, I have a question: if I modify a sampler file, e.g. _cmaes.py, to implement a new function (e.g. after_trial), how can I test my local changes? I am aware of tests/samplers_test, but I am not sure how to use it. Any suggestions and guidance are welcome. Thank you!
    6 replies
    Dário Passos
    @dario-passos
    Hi everyone. I'm starting to use optuna in a project on Conv. Neural Nets in chemometrics, and I was wondering if there is any video or tutorial that shows how to deploy/use the new optuna-dashboard. I'm used to running my experiments in a Jupyter notebook and so far I haven't figured out how to launch the dashboard. Thanks to the whole Optuna community for a great piece of software.
    5 replies
    Peter Cotton
    @microprediction
    Hi all. I've been using Optuna and also benchmarking it. I'm trying to figure out how to choose a good collection of option choices and tweaks, so I can try them all against my problems. My current effort is at https://github.com/microprediction/humpday/blob/main/humpday/optimizers/optunacube.py
    3 replies
    By the way I also wrote a small package to compare optimizer performance, albeit in a somewhat limited way focussed on my domain. There is an article at https://www.microprediction.com/blog/humpday and feedback is welcome. As I don't claim to be an optuna expert I suspect some tweaking would help.
    That said, optuna is doing well.
    braham-snyder
    @braham-snyder

    Hi -- when running multiple processes in distributed mode on a single machine, how should I choose n_jobs?

    My guess is -1 or maybe 1, but I'm not even certain of that.

    2 replies
    braham-snyder
    @braham-snyder
    I should clarify my objective is CPU-bound and GIL-locked -- n_jobs=-1 w/o distributed storage uses only 3/16 cores
    Crystal Humphries
    @CrystalHumphries
    Is there a way to tweak optuna so that one can vary the number of candidates evaluated per acquisition step? That is, instead of testing one new parameter set at a time, I would prefer to test three or more at once.
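
    A minimal sketch of one way to batch candidates, using the Ask-and-Tell interface from v2.5 (evaluate is a hypothetical function): ask for several trials up front, evaluate them together, then tell the results back.

    import optuna

    study = optuna.create_study()
    trials = [study.ask() for _ in range(3)]                 # >= 3 candidates at once
    params = [t.suggest_float('x', -10.0, 10.0) for t in trials]
    results = [evaluate(p) for p in params]                  # hypothetical batch evaluation
    for t, r in zip(trials, results):
        study.tell(t, r)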
    3 replies
    Hamza Ali
    @ryzbaka
    Hi everyone, my name's Hamza (https://github.com/ryzbaka). I'm experienced in full-stack web development, data engineering, and computational statistics. I'm interested in working on the Optuna web dashboard for GSOC 2021.
    3 replies
    I'd like to know more about the process of getting started with the Optuna codebase. Based on the document here, should I get started with figuring out how to port the dashboard to TS or is there something else that I'm missing?
    Luca Ponzoni
    @luponzo86
    Hi, this may be a stupid question. I'm trying to use multi-objective optimization in Optuna 2.5.0 and I'm not sure how to turn off pruning, to avoid the error NotImplementedError("Trial.report is not supported for multi-objective optimization.").
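
    A minimal sketch (the objective is illustrative): the error comes from calling trial.report(), so the simplest fix is not to call report()/should_prune() in a multi-objective objective; a NopPruner can make the intent explicit.

    import optuna

    study = optuna.create_study(
        directions=['minimize', 'maximize'],
        pruner=optuna.pruners.NopPruner(),  # never prunes
    )

    def objective(trial):
        x = trial.suggest_float('x', 0.0, 1.0)
        # No trial.report() / trial.should_prune() calls in multi-objective studies.
        return x, 1.0 - x

    study.optimize(objective, n_trials=20)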
    7 replies
    Dmitriy Selivanov
    @dselivanov
    Hi folks. I've tried to google but could not find a solution. I have a proxy loss function loss = weight_1 * loss_component_1 + weight_2 * loss_component_2 + ..., with the constraints sum(weight_i) = 1 and 0 < weight_i < 1 for all i. I want to find the optimal combination of weight_i, so essentially I need to sample the weights from the probability simplex. Of course I can sample the parameters uniformly and then normalize them, but that doesn't feel like the right way.
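
    A minimal sketch of one workaround (suggest_weights is a hypothetical helper): a stick-breaking construction keeps the weights in [0, 1] and makes them sum to exactly 1 by construction, instead of normalizing after the fact.

    def suggest_weights(trial, n):
        # Stick-breaking: each step takes a suggested fraction of the remaining mass.
        remaining = 1.0
        weights = []
        for i in range(n - 1):
            fraction = trial.suggest_float(f'fraction_{i}', 0.0, 1.0)
            weights.append(remaining * fraction)
            remaining -= weights[-1]
        weights.append(remaining)  # leftover mass, so the total is exactly 1
        return weights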
    7 replies
    esarvestani
    @esarvestani

    Hi everyone. I am using Optuna for hyperparameter optimization of an iterative process in which the number of samples grows with each iteration. I start Optuna from scratch at iteration 0, but for later iterations I reuse the accumulated trials from all previous iterations. With this warm-up scheme, after some iterations the search concentrates on a very small region of the parameter space, so I need to give it a chance to look at other regions again. One idea is to force it to forget the trials from long ago: for example, when iteration 5 starts, ignore the trials from iterations 0 and 1, and so on. To do this I use the code below to manually change the state of those trials from 'COMPLETE' to 'FAIL'; when the study is loaded, only trials with state='COMPLETE' are taken into account.

    import sqlite3
    import pandas as pd

    def makefailSqliteTable(storage):
        # 'storage' is the path to the SQLite file itself (not the 'sqlite:///...' URL).
        sqliteConnection = None
        try:
            sqliteConnection = sqlite3.connect(storage)
            cursor = sqliteConnection.cursor()
            # Mark every trial as failed.
            sql_update_query = """Update trials set state = 'FAIL' """
            cursor.execute(sql_update_query)
            sqliteConnection.commit()
            cursor.close()
        except sqlite3.Error as error:
            print("Failed to update sqlite table", error)
        finally:
            if sqliteConnection:
                sqliteConnection.close()
                print("The SQLite connection is closed")

    def updateSqliteTable(storage, N):
        # Restore the most recent N trials to COMPLETE (run after makefailSqliteTable).
        sqliteConnection = None
        try:
            sqliteConnection = sqlite3.connect(storage)
            cursor = sqliteConnection.cursor()
            df = pd.read_sql_query("SELECT * from trials", sqliteConnection)
            sql_update_query = """Update trials set state = 'COMPLETE' where number > """ + str(len(df) - N)
            cursor.execute(sql_update_query)
            sqliteConnection.commit()
            cursor.close()
        except sqlite3.Error as error:
            print("Failed to update sqlite table", error)
        finally:
            if sqliteConnection:
                sqliteConnection.close()
                print("The SQLite connection is closed")

    I would like to know whether this procedure does what I want. I mean, does it really forget the history from long ago?