Optuna Versions

A hyperparameter optimization framework

v3.6.1

1 month ago

This is the release note of v3.6.1.

Bug Fixes

  • [Backport] Fix Wilcoxon pruner bug when best_trial has no intermediate value (#5370)
  • [Backport] Address issue #5358 (#5371)
  • [Backport] Fix average_is_best implementation in WilcoxonPruner (#5373)

Other

  • Bump up version number to v3.6.1 (#5372)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@HideakiImamura, @eukaryo, @nabenabe0928

v3.6.0

1 month ago

This is the release note of v3.6.0.

Highlights

Optuna 3.6 adds the following new features. See our release blog for more detailed information.

  • Wilcoxon Pruner: New Pruner Based on Wilcoxon Signed-Rank Test
  • Lightweight Gaussian Process (GP)-Based Sampler
  • Speeding up Importance Evaluation with PED-ANOVA
  • Stricter Verification Logic for FrozenTrial
  • Refactoring the Optuna Dashboard
  • Migration to Optuna Integration

Breaking Changes

  • Implement optuna.terminator using optuna._gp (#5241)

These migration-related PRs do not break backward compatibility as long as optuna-integration v3.6.0 or later is installed in your environment.

New Features

  • Backport the change of the timeline plot in Optuna Dashboard (#5168)
  • Wilcoxon pruner (#5181)
  • Add GPSampler (#5185)
  • Add a super quick f-ANOVA algorithm named PED-ANOVA (#5212)

Enhancements

  • Add formats.sh based on optuna/master (https://github.com/optuna/optuna-integration/pull/75)
  • Use vectorization for categorical distance (#5147)
  • Unify implementation of fast non-dominated sort (#5160)
  • Raise TypeError if params is not a dict in enqueue_trial (#5164, thanks @adjeiv!)
  • Upgrade FrozenTrial._validate() (#5211)
  • Import SQLAlchemy lazily (#5215)
  • Add UCB for optuna._gp (#5224)
  • Enhance performance of GPSampler (#5274)
  • Fix inconsistencies between terminator and its visualization (#5276, thanks @SimonPop!)
  • Enhance GPSampler performance other than introducing local search (#5279)

Documentation

  • Remove study optimize from CLI tutorial page (#5152)
  • Clarify the GridSampler with ask-and-tell interface (#5153)
  • Clean-up faq.rst (#5170)
  • Make Methods section hidden from Artifact Docs (#5188)
  • Enhance README (#5189)
  • Add a new section explaining how to customize figures (#5194)
  • Replace legacy plotly.graph_objs with plotly.graph_objects (#5223)
  • Add a note section to explain that reseed affects reproducibility (#5233)
  • Update links to papers (#5235)
  • Add a link to the module's example in the documentation for the optuna.terminator module (#5243, thanks @HarshitNagpal29!)
  • Replace the old example directory (#5244)
  • Add Optuna Dashboard section to docs (#5250, thanks @porink0424!)
  • Add a safety guard to Wilcoxon pruner, and modify the docstring (#5256)
  • Replace LightGBM with PyTorch-based example to remove lightgbm dependency in visualization tutorial (#5257)
  • Remove unnecessary comment in Specify Hyperparameters Manually tutorial page (#5258)
  • Add a tutorial of Wilcoxon pruner (#5266)
  • Clarify that pruners module does not support multi-objective optimization (#5270)
  • Minor fixes (#5275)
  • Add a guide to PED-ANOVA for n_trials>10000 (#5310)
  • Minor fixes of docs and code comments for PedAnovaImportanceEvaluator (#5312)
  • Fix doc for WilcoxonPruner (#5313)
  • Fix doc example in WilcoxonPruner (#5315)

Tests

  • Unify the implementation of _create_frozen_trial() under testing module (#5157)
  • Remove the Python version constraint for PyTorch (#5278)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@Alnusjaponica, @DanielAvdar, @HarshitNagpal29, @HideakiImamura, @SimonPop, @adjeiv, @buruzaemon, @c-bata, @contramundum53, @dheemantha-bhat, @eukaryo, @gen740, @hrntsm, @knshnb, @nabenabe0928, @not522, @nzw0301, @porink0424, @ryota717, @shahpratham, @toshihikoyanase, @y0z

v3.5.0

5 months ago

This is the release note of v3.5.0.

Highlights

This is a maintenance release with various bug fixes, documentation improvements, and more.

New Features

  • Support constraints in plot contour (#4975, thanks @y-kamiya!)
  • Support infeasible coloring for plot_timeline (#5014)
  • Support constant_liar in multi-objective TPESampler (#5021)
  • Add optuna study-names cli (#5029)
  • Use ExpectedHypervolumeImprovement candidates function for BotorchSampler (#5065, thanks @adjeiv!)
  • Fix logei_candidates_func in botorch.py (#5094, thanks @sousu4!)
  • Report CV scores from within OptunaSearchCV (#5098, thanks @adjeiv!)

Enhancements

  • Support constant_liar in multi-objective TPESampler (#5021)
  • Make positional args to kwargs in suggest_int (#5044)
  • Ensure n_below is never negative in TPESampler (#5074, thanks @p1kit!)
  • Improve visibility of infeasible trials in plot_contour (#5107)

Bug Fixes

  • Fix random number generator of NSGAIIChildGenerationStrategy (#5003)
  • Return trials for above in MO split when n_below=0 (#5079)
  • Enable loading of read-only files (#5103, thanks @Guillaume227!)
  • Fix logpdf for scaled truncnorm (#5110)
  • Fix the bug of matplotlib's plot_rank function (#5133)

Documentation

  • Add the table of dependencies in each integration module (#5005)
  • Enhance the documentation of LightGBM tuner and separate train() from __init__.py (#5010)
  • Update link to reference (#5064)
  • Update the FAQ on reproducible optimization results to remove note on HyperbandPruner (#5075, thanks @felix-cw!)
  • Remove MOTPESampler from index.rst file (#5084, thanks @Ashhar-24!)
  • Add a note about the deprecation of MOTPESampler to the doc (#5086)
  • Add the TPE tutorial paper to the doc-string (#5096)
  • Update README.md to fix the installation and integration (#5126)
  • Clarify that Recommended budgets include n_startup_trials (#5137)

Code Fixes

  • Implement NSGA-III elite population selection strategy (#5027)
  • Fix import path of PyTorchLightning (#5028)
  • Fix Any with float in _TreeNode.children (#5040, thanks @aanghelidi!)
  • Fix future annotation in typing.py (#5054, thanks @jot-s-bindra!)
  • Add future annotations to callback and terminator files inside terminator folder (#5055, thanks @jot-s-bindra!)
  • Fix future annotations to edf python file (#5056, thanks @Vaibhav101203!)
  • Fix future annotations in _hypervolume_history.py (#5057, thanks @Vaibhav101203!)
  • Reduce the warning in tests/storages_tests/test_heartbeat.py (#5066, thanks @sousu4!)
  • Fix future annotation to frozen.py (#5080, thanks @Vaibhav101203!)
  • Fix annotation for dataframe.py (#5081, thanks @Vaibhav101203!)
  • Fix future annotation (#5083, thanks @Vaibhav101203!)
  • Fix type annotation (#5105)
  • Fix mypy error in CI (#5106)
  • Isolate the fast.ai module (#5120, thanks @sousu4!)
  • Clean up workflow file (#5122)

Continuous Integration

  • Run test_tensorflow in Python 3.11 (https://github.com/optuna/optuna-integration/pull/46)
  • Exclude mypy checks for chainer (https://github.com/optuna/optuna-integration/pull/48)
  • Support Python 3.12 on tests for core modules (#5018)
  • Fix the issue where formats.sh does not handle tutorial/ (#5023, thanks @sousu4!)
  • Skip slow integration tests (#5033)
  • Install PyTorch for CPU on CIs (#5042)
  • Remove unused type: ignore (#5047)
  • Reduce tests-mpi to the oldest and latest Python versions (#5067)
  • Add workflow matrices for the tests to reduce GitHub check runtime (#5093)
  • Remove the skip of Python 3.11 in tests-mpi (#5100)
  • Downgrade kaleido to 0.1.0post1 for fixing Windows CI (#5101)
  • Rename should-skip to test-trigger-type for more clarity (#5134)
  • Pin the version of PyQt6-Qt6 (#5135)
  • Revert Pin the version of PyQt6-Qt6 (#5140)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@Alnusjaponica, @Ashhar-24, @Guillaume227, @HideakiImamura, @JustinGoheen, @Vaibhav101203, @aanghelidi, @adjeiv, @c-bata, @contramundum53, @eukaryo, @felix-cw, @gen740, @jot-s-bindra, @keisuke-umezawa, @knshnb, @nabenabe0928, @not522, @nzw0301, @p1kit, @sousu4, @toshihikoyanase, @y-kamiya

v3.4.0

6 months ago

This is the release note of v3.4.0.

Highlights

Optuna 3.4 adds the following new features. See our release blog for more detailed information.

  • Preferential Optimization (Optuna Dashboard)
  • Optuna Artifact
  • Jupyter Lab Extension
  • VS Code Extension
  • User-defined Distance for Categorical Parameters in TPE
  • Constrained Optimization Support for Visualization Functions
  • User-Defined Plotly’s Figure Support (Optuna Dashboard)
  • 3D Model Viewer Support (Optuna Dashboard)

Breaking Changes

  • Remove deprecated arguments with regard to LightGBM>=4.0 (#4844)
  • Deprecate SkoptSampler (#4913)

New Features

  • Support constraints for intermediate values plot (#4851, thanks @adjeiv!)
  • Display all objectives on hyperparameter importances plot (#4871)
  • Implement get_all_study_names() (#4898)
  • Support constraints plot_rank (#4899, thanks @ryota717!)
  • Support Study Artifacts (#4905)
  • Support specifying distance between categorical choices in TPESampler (#4926)
  • Add metric_names getter to study (#4930)
  • Add artifact middleware for exponential backoff retries (#4956)
  • Add GCSArtifactStore (#4967, thanks @semiexp!)
  • Add BestValueStagnationEvaluator (#4974, thanks @smygw72!)
  • Allow user-defined objective names in hyperparameter importance plots (#4986)

Enhancements

  • CHG constrained param displayed in #cccccc (#4877, thanks @louis-she!)
  • Faster implementation of fANOVA (#4897)
  • Support constraint in plot slice (#4906, thanks @hrntsm!)
  • Add mimetype input (#4910, thanks @hrntsm!)
  • Show all ticks in _parallel_coordinate.py when log scale (#4911)
  • Speed up multi-objective TPE (#5017)

Bug Fixes

  • Fix numpy indexing bugs and named tuple comparing (#4874, thanks @ryota717!)
  • Fix fail_stale_trials with race condition (#4886)
  • Fix alias handler (#4887)
  • Add lazy random state and use it in RandomSampler (#4970, thanks @shu65!)
  • Fix TensorBoard error on categorical choices of mixed types (#4973, thanks @ciffelia!)
  • Use lazy random state in samplers (#4976, thanks @shu65!)
  • Fix an error that does not consider min_child_samples (#5007)
  • Fix BruteForceSampler in parallel optimization (#5022)

Documentation

  • Fix typo in _filesystem.py (#4909)
  • Mention a pruner instance is not stored in a storage in resuming tutorial (#4927)
  • Add introduction of optuna-fast-fanova in documents (#4943)
  • Add artifact tutorial (#4954)
  • Fix an example code in Boto3ArtifactStore's docstring (#4957)
  • Add tutorial for JournalStorage (#4980, thanks @semiexp!)
  • Fix document regarding ArtifactNotFound (#4982, thanks @smygw72!)
  • Add the workaround for duplicated samples to FAQ (#5006)

Tests

  • Reduce n_trials in test_combination_of_different_distributions_objective (#4950)
  • Replaces California housing dataset with iris dataset (#4953)
  • Fix numpy duplication warning (#4978, thanks @torotoki!)
  • Make test order deterministic for pytest-xdist (#4999)

Code Fixes

  • Move shap (https://github.com/optuna/optuna-integration/pull/32)
  • Remove shap (#4791)
  • Use isinstance instead of if type() is ... (#4896)
  • Make cmaes dependency optional (#4901)
  • Call internal sampler's before_trial (#4914)
  • Refactor _grid.py (#4918)
  • Fix the checks-integration errors on LightGBMTuner (#4923)
  • Replace deprecated botorch method to remove warning (#4940)
  • Fix type annotation (#4941)
  • Add _split_trials instead of _get_observation_pairs and _split_observation_pairs (#4947)
  • Use __future__.annotations in optuna/visualization/_optimization_history.py (#4964, thanks @YuigaWada!)
  • Fix #4508 for optuna/visualization/_hypervolume_history.py (#4965, thanks @RuTiO2le!)
  • Use future annotation in optuna/_convert_positional_args.py (#4966, thanks @hamster-86!)
  • Fix type annotation of SQLAlchemy (#4968)
  • Use collections.abc in optuna/visualization/_edf.py (#4969, thanks @g-tamaki!)
  • Use collections.abc in plot pareto front (#4971)
  • Remove experimental_func from metric_names property (#4983, thanks @semiexp!)
  • Add __future__.annotations to progress_bar.py (#4992)
  • Fix annotations in optuna/optuna/visualization/matplotlib/_optimization_history.py (#5015, thanks @sousu4!)

Continuous Integration

  • Fix checks integration (#4869)
  • Remove fakeredis version constraint (#4873)
  • Support asv 0.6.0 (#4882)
  • Fix speed-benchmarks CI (#4903)
  • Fix Tests (MPI) CI (#4904)
  • Fix xgboost pruning callback (#4921)
  • Enhance speed benchmark (#4981, thanks @g-tamaki!)
  • Drop Python 3.7 on tests-mpi (#4998)
  • Remove Python 3.7 from the development docker image build (#5009)
  • Use CPU version of PyTorch in Docker image (#5019)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@Alnusjaponica, @HideakiImamura, @RuTiO2le, @YuigaWada, @adjeiv, @c-bata, @ciffelia, @contramundum53, @cross32768, @eukaryo, @g-tamaki, @g-votte, @gen740, @hamster-86, @hrntsm, @hvy, @keisuke-umezawa, @knshnb, @lucasmrdt, @louis-she, @moririn2528, @nabenabe0928, @not522, @nzw0301, @ryota717, @semiexp, @shu65, @smygw72, @sousu4, @torotoki, @toshihikoyanase, @xadrianzetx

v3.3.0

9 months ago

This is the release note of v3.3.0.

Highlights

CMA-ES with Learning Rate Adaptation

A new variant of CMA-ES has been added. You can use it by setting the lr_adapt argument of CmaEsSampler to True. For multimodal and/or noisy problems, adapting the learning rate can help avoid getting trapped in local optima. For more details, please refer to #4817. We want to thank @nomuramasahir0, one of the authors of LRA-CMA-ES, for his great work and for the development of the cmaes library.

Hypervolume History Plot for Multiobjective Optimization

In multiobjective optimization, the history of hypervolume is commonly used as an indicator of performance. Optuna now supports this feature in the visualization module. Thanks to @y0z for your great work!

Constrained Optimization Support for Visualization Functions

(Figure: optimization history plots considering constraint violations, Plotly and Matplotlib versions.)

Some samplers support constrained optimization; however, many other features cannot handle constraints yet. We are continuously enhancing support for constraints. In this release, plot_optimization_history starts to consider constraint violations. Thanks to @hrntsm for your great work!

import optuna

def objective(trial):
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)
    v0 = 4 * x**2 + 4 * y**2
    trial.set_user_attr("constraint", [1000 - v0])  # values <= 0 mean the constraint is satisfied
    return v0

def constraints_func(trial):
    return trial.user_attrs["constraint"]

sampler = optuna.samplers.TPESampler(constraints_func=constraints_func)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
fig = optuna.visualization.plot_optimization_history(study)
fig.show()

Streamlit Integration for Human-in-the-loop Optimization

Optuna Dashboard v0.11.0 provides tight integration with the Streamlit framework. By using this feature, you can create your own application for human-in-the-loop optimization. Please check out the documentation and the example for details.

New Features

  • Add logei_candidate_func and make it default when available (#4667)
  • Support JournalFileStorage and JournalRedisStorage on CLI (#4696)
  • Implement hypervolume history plot for matplotlib backend (#4748, thanks @y0z!)
  • Add cv_results_ to OptunaSearchCV (#4751, thanks @jckkvs!)
  • Add optuna.integration.botorch.qnei_candidates_func (#4753, thanks @kstoneriv3!)
  • Add hypervolume history plot for plotly backend (#4757, thanks @y0z!)
  • Add FileSystemArtifactStore (#4763)
  • Sort params on fetch (#4775)
  • Add constraints support to _optimization_history_plot (#4793, thanks @hrntsm!)
  • Bump up LightGBM version to v4.0.0 (#4810)
  • Add constraints support to matplotlib._optimization_history_plot (#4816, thanks @hrntsm!)
  • Introduce CMA-ES with Learning Rate Adaptation (#4817)
  • Add upload_artifact api (#4823)
  • Add before_trial (#4825)
  • Add Boto3ArtifactStore (#4840)
  • Display best objective value in contour plot for a given param pair, not the value from the most recent trial (#4848)

Enhancements

  • Speed up logpdf in _truncnorm.py (#4712)
  • Speed up erf (#4713)
  • Speed up get_all_trials in InMemoryStorage (#4716)
  • Add a warning for a progress bar not being displayed #4679 (#4728, thanks @rishabsinghh!)
  • Make BruteForceSampler consider failed trials (#4747)
  • Use shallow copy in _get_latest_trial (#4774)
  • Speed up plot_hypervolume_history (#4776)

Bug Fixes

  • Solve issue #4557 - error_score (#4642, thanks @jckkvs!)
  • Fix BruteForceSampler for pruned trials (#4720)
  • Fix plot_slice bug when some of the choices are numeric (#4724)
  • Make LightGBMTuner reproducible (#4795)

Documentation

  • Remove jquery-extension (#4691)
  • Add FAQ on combinatorial search space (#4723)
  • Fix docs (#4732)
  • Add plot_rank and plot_timeline plots to visualization tutorial (#4735)
  • Fix typos found in integration/sklearn.py (#4745)
  • Remove study.n_objectives from document (#4796)
  • Add lower version constraint for sphinx_rtd_theme (#4853)
  • Artifact docs (#4855)

Tests

  • Remove unnecessary pytestmark (https://github.com/optuna/optuna-integration/pull/29)
  • Add GridSampler test for failed trials (#4721)
  • Follow up PR #4642 by adding a unit test to confirm OptunaSearchCV behavior (#4758)
  • Fix test_log_gauss_mass with SciPy 1.11.0 (#4766)
  • Fix Pytorch lightning unit test (#4780)
  • Remove skopt (#4792)
  • Rename test directory (#4839)

Code Fixes

  • Simplify the type annotations in benchmarks (#4703, thanks @caprest!)
  • Unify sampling implementation in TPESampler (#4717)
  • Get values after _get_observation_pairs (#4742)
  • Remove unnecessary period (#4746)
  • Handle deprecated argument early_stopping_rounds (#4752)
  • Separate dominate function from _fast_non_dominated_sort() (#4759)
  • Separate after_trial strategy (#4760)
  • Remove unused attributes in TPESampler (#4769)
  • Remove pkg_resources (#4770)
  • Use trials as argument of _calculate_weights_below_for_multi_objective (#4773)
  • Fix type annotation (#4797, thanks @taniokay!)
  • Follow up separation of after trial strategy (#4803)
  • Loose coupling nsgaii child generation (#4806)
  • Remove _study_id parameter from Trial class (#4811, thanks @adjeiv!)
  • Loose coupling nsgaii elite population selection (#4821)
  • Fix checks integration (#4826)
  • Remove OrderedDict (#4838, thanks @taniokay!)
  • Fix typo (#4842, thanks @wouterzwerink!)
  • Followup child generation strategy (#4856)
  • Remove samplers._search_space.IntersectionSearchSpace (#4857)
  • Add experimental decorators to artifacts functionalities (#4858)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@Alnusjaponica, @HideakiImamura, @adjeiv, @c-bata, @caprest, @contramundum53, @cross32768, @eukaryo, @gen740, @hrntsm, @jckkvs, @knshnb, @kstoneriv3, @nomuramasahir0, @not522, @nzw0301, @rishabsinghh, @taniokay, @toshihikoyanase, @wouterzwerink, @xadrianzetx, @y0z

v3.2.0

11 months ago

This is the release note of v3.2.0.

Highlights

Human-in-the-loop optimization

With the latest release, we have incorporated support for human-in-the-loop optimization. It enables an interactive optimization process between users and the optimization algorithm. As a result, it opens up new opportunities for the application of Optuna in tuning Generative AI. For further details, please check out our human-in-the-loop optimization tutorial.

Overview of human-in-the-loop optimization. Generated images and sounds are displayed on Optuna Dashboard, and users can directly evaluate them there.

Automatic optimization terminator (Optuna Terminator)

Optuna Terminator is a new feature that quantitatively estimates the remaining room for improvement and automatically stops the optimization process. It is designed to relieve users of the burden of choosing an appropriate number of trials (n_trials) and to avoid wasting computational resources by running the optimization loop indefinitely. See #4398 and optuna-examples#190.

Transition of estimated room for improvement. It steadily decreases towards the level of cross-validation errors.

New sampling algorithms

NSGA-III for many-objective optimization

We've introduced the NSGAIIISampler as a new multi-objective optimization sampler. It implements NSGA-III, which is an extended variant of NSGA-II, designed to efficiently optimize even when the dimensionality of the objective values is large (especially when it's four or more). NSGA-II had an issue where the search would become biased towards specific regions when the dimensionality of the objective values exceeded four. In NSGA-III, the algorithm is designed to distribute the points more uniformly. This feature was introduced by #4436.

Objective value space for multi-objective optimization (minimization problem). Red points represent Pareto solutions found by NSGA-II. Blue points represent those found by NSGA-III. NSGA-II shows a tendency for points to concentrate towards each axis (corresponding to the ends of the Pareto Front). On the other hand, NSGA-III displays a wider distribution across the Pareto Front.

BI-population CMA-ES

Continuing from v3.1, significant improvements have been made to the CMA-ES sampler. As a new feature, we've added the BI-population CMA-ES algorithm, a kind of restart strategy that mitigates the problem of falling into local optima. Whether the IPOP CMA-ES, which we've been providing so far, or the new BI-population CMA-ES performs better depends on the problem. If you're struggling with local optima, please try BI-population CMA-ES as well. For more details, please see #4464.

New visualization functions

Timeline plot for trial life cycle

The timeline plot visualizes the progress (status, start and end times) of each trial. In this plot, the horizontal axis represents time, and trials are plotted in the vertical direction. Each trial is represented as a horizontal bar, drawn from the start to the end of the trial. With this plot, you can quickly get an understanding of the overall progress of the optimization experiment, such as whether parallel optimization is progressing properly or if there are any trials taking an unusually long time.

Similar to other plot functions, all you need to do is pass the study object to plot_timeline. For more details, please refer to #4470 and #4538.

Rank plot to understand input-output relationship

A new visualization feature, plot_rank, has been introduced. This plot provides valuable insights into the landscape of the objective function, i.e., the relationship between parameters and objective values. In this plot, the vertical and horizontal axes represent parameter values, and each point represents a single trial. The points are colored according to their ranks.

Similar to other plot functions, all you need to do is pass the study object to plot_rank. For more details, please refer to #4427 and #4541.

Isolating integration modules

We have separated Optuna's integration module into a different package called optuna-integration. Maintaining many integrations within the Optuna package was becoming costly. By separating the integration module, we aim to improve the development speed of both Optuna itself and its integration modules. As of the release of v3.2, we have migrated six integration modules: allennlp, catalyst, chainer, keras, skorch, and tensorflow (except for the TensorBoard integration). To use these integration modules, pip install optuna-integration is necessary. See #4484.

Starting support for Mac & Windows

We have started supporting Optuna on Mac and Windows. While many features already worked in previous versions, we have fixed issues that arose in certain modules, such as Storage. See #4457 and #4458.

New Features

  • Show custom objective names for multi-objective optimization (#4383)
  • Support DDP in PyTorch-Lightning (#4384)
  • Implement the evaluator of regret bounds and its GP backend for Optuna Terminator 🤖 (#4401)
  • Implement the termination logic and APIs of Optuna Terminator 🤖 (#4405)
  • Add rank plot (#4427)
  • Implement NSGA-III (#4436)
  • Add BIPOP-CMA-ES support in CmaEsSampler (#4464)
  • Add timeline plot with plotly as backend (#4470)
  • Move optuna.samplers._search_space.intersection.py to optuna.search_space.intersection.py (#4505)
  • Add timeline plot with matplotlib as backend (#4538)
  • Add rank plot matplotlib version (#4541)
  • Support batched sampling with BoTorch (#4591, thanks @kstoneriv3!)
  • Add plot_terminator_improvement as visualization of optuna.terminator (#4609)
  • Add import for public API of optuna.terminator to optuna/terminator/__init__.py (#4669)
  • Add matplotlib version of plot_terminator_improvement (#4701)

Enhancements

  • Import cmaes package lazily (#4394)
  • Make BruteForceSampler stateless (#4408)
  • Sort studies by study_id (#4414)
  • Add index study_id column on trials table (#4449, thanks @Ilevk!)
  • Cache all trials in Study with delayed relative sampling (#4468)
  • Avoid error at import time for optuna.terminator.improvement.gp.botorch (#4483)
  • Avoid standardizing Yvar in _BoTorchGaussianProcess (#4488)
  • Change the noise value in _BoTorchGaussianProcess to suppress warning messages (#4510)
  • Change the argument of intersection_search_space from study to trials (#4514)
  • Improve deprecated messages in the old suggest functions (#4562)
  • Add support for distributed>=2023.3.2 (#4589, thanks @jrbourbeau!)
  • Fix plot_rank marker lines (#4602)
  • Sync owned trials when calling study.ask and study.get_trials (#4631)
  • Ensure that the plotly version of timeline plot draws a legend even if all TrialStates are the same (#4635)

Bug Fixes

  • Fix botorch dependency (#4368)
  • Mitigate a blocking issue while running migrations with SQLAlchemy 2.0 (#4386)
  • Fix colorlog compatibility problem (#4406)
  • Validate length of values in add_trial (#4416)
  • Fix RDBStorage.get_best_trial when there are infs (#4422)
  • Fix bug of CMA-ES with margin on RDBStorage or JournalStorage (#4434)
  • Fix CMA-ES Sampler (#4443)
  • Fix param_mask for multivariate TPE with constant_liar (#4462)
  • Make QMCSampler samplers reproducible with seed=0 (#4480)
  • Fix noise becoming NaN for the terminator module (#4512)
  • Fix metric_names on _log_completed_trial() function (#4594)
  • Fix ImportError for botorch<=0.4.0 (#4626)
  • Fix index of n_retries += 1 in RDBStorage (#4658)
  • Fix CMA-ES with margin bug (#4661)
  • Fix a logic for invalidating the cache in CachedStorage (#4670)
  • Fix #4697 ValueError: Rank 0 node expects an optuna.trial.Trial instance as the trial argument (#4698, thanks @keisukefukuda!)
  • Fix a bug reported in issue #4699 (#4700)
  • Add tests for plot_terminator_improvement and fix some bugs (#4702)

Documentation

  • Create the document and run the test to create document in each PR (https://github.com/optuna/optuna-integration/pull/2)
  • Fix Keras docs (https://github.com/optuna/optuna-integration/pull/12)
  • Add links of documents (https://github.com/optuna/optuna-integration/pull/17)
  • Load sphinxcontrib.jquery explicitly (https://github.com/optuna/optuna-integration/pull/18)
  • Add docstring for the Terminator class (#4596)
  • Fix the build on Read the Docs by following optuna #4659 (https://github.com/optuna/optuna-integration/pull/20)
  • Add external packages to intersphinx_mapping in conf.py (#4290)
  • Minor fix of documents (#4360)
  • Fix a typo in MeanDecreaseImpurityImportanceEvaluator (#4385)
  • Update to Sphinx 6 (#4479)
  • Fix URL to the line of optuna-integration file (#4498)
  • Fix typo (#4515, thanks @gituser789!)
  • Resolve error in compiling PDF documents (#4605)
  • Add sphinxcontrib.jquery extension to conf.py (#4615)
  • Remove an example code of SkoptSampler (#4625)
  • Add links to the optuna-integration document (#4638)
  • Add manually written index page of tutorial (#4640)
  • Fix the build on Read the Docs (#4659)
  • Improve docstring of rank_plot function and its matplotlib version (#4660)
  • Add a link to tutorial of human-in-the-loop optimization (#4665)
  • Fix typo for progress bar in documentation (#4673, thanks @gituser789!)
  • Add docstrings to optuna.terminator (#4675)
  • Add docstring for plot_terminator_improvement (#4677)
  • Remove versionadded directives (#4681)
  • Add pareto front display example: 2D-plot from 3D-optimization including crop the scale (#4685, thanks @gituser789!)
  • Embed a YouTube video in the docstring of DaskStorage (#4694)
  • List Dashboard in navbar (#4708)
  • Fix docstring of terminator improvement for min_n_trials (#4709)

Tests

  • Suppress FutureWarning about Trial.set_system_attr in storage tests (#4323)
  • Add test for casting in test_nsgaii.py (#4387)
  • Fix the blocking issue on test_with_server.py (#4402)
  • Fix mypy error about Chainer (#4410)
  • Add unit tests for the _BoTorchGaussianProcess class (#4441)
  • Implement unit tests for optuna.terminator.improvement._preprocessing.py (#4506)
  • Fix mypy error about PyTorch Lightning (#4520)

Code Fixes

  • Simplify type annotations (https://github.com/optuna/optuna-integration/pull/10)
  • Copy _imports.py from optuna (https://github.com/optuna/optuna-integration/pull/16)
  • Refactor ParzenEstimator (#4183)
  • Fix mypy error about AllenNLP in Checks (integration) (#4277)
  • Fix checks integration about pytorch lightning (#4322)
  • Minor refactoring of tests/hypervolume_tests/test_hssp.py (#4329)
  • Remove unnecessary sklearn version condition (#4379)
  • Support black 23.1.0 (#4382)
  • Warn unexpected search spaces for CmaEsSampler (#4395)
  • Fix flake8 errors on sklearn integration (#4407)
  • Fix mypy error about PyTorch Distributed (#4413)
  • Use numpy.polynomial in _erf.py (#4415)
  • Refactor _ParzenEstimator (#4433)
  • Simplify an argument's name of RegretBoundEvaluator (#4442)
  • Fix Checks(integration) about terminator/.../botorch.py (#4461)
  • Add an experimental decorator to RegretBoundEvaluator (#4469)
  • Add JSON serializable type (#4478)
  • Move optuna.samplers._search_space.group_decomposed.py to optuna.search_space.group_decomposed.py (#4491)
  • Simplify annotations in optuna.visualization (#4525, thanks @harupy!)
  • Simplify annotations in tests.visualization_tests (#4526, thanks @harupy!)
  • Remove unused instance variables in _BoTorchGaussianProcess (#4530)
  • Avoid deepcopy in optuna.visualization.plot_timeline (#4540)
  • Use SingleTaskGP for Optuna terminator (#4542)
  • Change deletion timing of optuna.samplers.IntersectionSearchSpace and optuna.samplers.intersection_search_space (#4549)
  • Remove IntersectionSearchSpace in optuna.terminator module (#4595)
  • Change arguments of BaseErrorEvaluator and classes that inherit from it (#4607)
  • Delete import Rectangle in visualization/matplotlib (#4620)
  • Simplify type annotations in visualize/_rank.py and visualization_tests/ (#4628)
  • Move the function _distribution_is_log to optuna.distributions from optuna/terminator/__init__.py (#4668)
  • Separate _fast_non_dominated_sort() from the samplers (#4671)
  • Read trials from remote storage whenever get_all_trials of _CachedStorage is called (#4672)
  • Remove experimental label from _ProgressBar (#4684, thanks @tungbq!)

Continuous Integration

  • Fix coverage.yml (https://github.com/optuna/optuna-integration/pull/3)
  • Delete labeler.yaml (https://github.com/optuna/optuna-integration/pull/6)
  • Fix pypi publish.yaml (https://github.com/optuna/optuna-integration/pull/11)
  • Test on an arbitrary branch (https://github.com/optuna/optuna-integration/pull/15)
  • Fix the CI with AllenNLP (https://github.com/optuna/optuna-integration/pull/24)
  • Update actions/setup-python@v2 -> v4 (#4307, thanks @Kaushik-Iyer!)
  • Update action versions (#4328)
  • Update actions/setup-python in mac-tests (follow-up for #4307) (#4343)
  • Add type ignore to ProcessGroup import from torch.distributed (#4347)
  • Fix label of gh-action-pypi-publish (#4359)
  • [Hotfix] Avoid to install SQLAlchemy 2.0 on checks (#4364)
  • [Hotfix] Add version constraint on SQLAlchemy for tests storage with server (#4372)
  • Disable colored log when NO_COLOR env or not tty (#4376)
  • Output installed packages in Tests CI (#4381)
  • Output installed packages in mac-test CI (#4397)
  • Use ubuntu-latest in PyPI publish CI (#4400)
  • Output installed packages in Checks CI (#4417, thanks @Kaushik-Iyer!)
  • Output installed packages in Coverage CI (#4423, thanks @Kaushik-Iyer!)
  • Fix mypy error on checks-integration CI (#4424)
  • Fix mac-test cache path (#4425)
  • Add minimum version tests of numpy, tqdm, colorlog, PyYAML (#4428)
  • Remove ignore test_pytorch_lightning (#4432)
  • Use PyYAML==5.1 on tests-with-minimum-dependencies (#4435)
  • Remove trailing spaces in CI configs (#4439)
  • Output installed packages in all remaining CIs (#4445, thanks @Kaushik-Iyer!)
  • Add windows ci check (#4457)
  • Make mac-test executed on PRs (#4458)
  • Add sqlalchemy<2.0.0 in Checks(integration) (#4482)
  • Fix ci test conditions (#4496)
  • Deploy results of visual regression test on Netlify (#4507)
  • Pin pytorch lightning version (#4522)
  • Securely deploy results of visual regression test on Netlify (#4532)
  • Pin Distributed version (#4545)
  • Delete fragile heartbeat test (#4551)
  • Ignore AllenNLP test from Mac-CI (#4561)
  • Delete visual-regression.yml (#4597)
  • Remove dependency on codecov (#4606)
  • Install test in checks-integration CI (#4612)
  • Fix checks integration (#4617)
  • Add Output dependency tree by pipdeptree to Actions (#4624)
  • Add a version constraint on fakeredis (#4637)
  • Hotfix and run catboost test w/ python 3.11 except for MacOS (#4646)
  • Run mlflow with Python 3.11 (#4647)

Other

  • Update repository settings as in optuna/optuna (https://github.com/optuna/optuna-integration/pull/7)
  • Bump up version to v3.2.0.dev (#4345)
  • Remove cached-path from setup.py (#4357)
  • Revert a merge commit for #4183 (#4429)
  • Include both venv and .venv in the exclude setting of the formatters (#4476)
  • Replace hacking with flake8 (#4556)
  • Fix Codecov link (#4564)
  • Add lightning_logs to .gitignore (#4565)
  • Fix targets of black and isort in formats.sh (#4610)
  • Install benchmark, optional, and test in dev Docker image (#4611)
  • Provide kind error message for missing optuna-integration (#4636)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@Alnusjaponica, @HideakiImamura, @Ilevk, @Jendker, @Kaushik-Iyer, @amylase, @c-bata, @contramundum53, @cross32768, @eukaryo, @g-votte, @gen740, @gituser789, @harupy, @himkt, @hvy, @jrbourbeau, @keisuke-umezawa, @keisukefukuda, @knshnb, @kstoneriv3, @li-li-github, @nomuramasahir0, @not522, @nzw0301, @toshihikoyanase, @tungbq

v3.1.1

1 year ago

This is the release note of v3.1.1.

Enhancements

  • [Backport] Import cmaes package lazily (#4573)

Bug Fixes

  • [Backport] Fix botorch dependency (#4569)
  • [Backport] Fix param_mask for multivariate TPE with constant_liar (#4570)
  • [Backport] Mitigate a blocking issue while running migrations with SQLAlchemy 2.0 (#4571)
  • [Backport] Fix bug of CMA-ES with margin on RDBStorage or JournalStorage (#4572)
  • [Backport] Fix RDBStorage.get_best_trial when there are infs (#4574)
  • [Backport] Fix CMA-ES Sampler (#4581)

Code Fixes

  • [Backport] Add types-tqdm for lint (#4566)

Other

  • Update version number to v3.1.1 (#4567)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@HideakiImamura, @contramundum53, @not522

v3.0.6

1 year ago

This is the release note of v3.0.6.

Installation

  • Fix a project metadata for scipy version constraint (#4494)

Other

  • Bump up version number to v3.0.6 (#4493)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@c-bata, @HideakiImamura

v3.1.0

1 year ago

This is the release note of v3.1.0.

You do not need to read this page from top to bottom to learn what is new in Optuna v3.1. The recommended way is to read the release blog.

Highlights

New Features

CMA-ES with Margin

[Animation: CMA-ES (left) vs. CMA-ES with Margin (right)]

The animation is taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.

CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our CmaEsSampler, which makes it more efficient in these cases. You can see the benchmark results here. For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - arXiv”, which has been accepted for presentation at GECCO 2022.

import optuna
from optuna.samplers import CmaEsSampler

def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y
 
study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=100)

Distributed Optimization via NFS

JournalFileStorage, a file storage backend based on JournalStorage, supports NFS (Network File System) environments. It is the easiest option for users who wish to execute distributed optimization in environments where it is difficult to set up database servers such as MySQL, PostgreSQL or Redis (e.g. #815, #1330, #1457 and #2216).

import optuna
from optuna.storages import JournalStorage, JournalFileStorage

def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y
 
storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)

For more information on JournalFileStorage, see the blog post “Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage” written by @wattlebirdaz.

A Brand-New Redis Storage

We have replaced the Redis storage backend with a JournalStorage-based one. The experimental RedisStorage class has been removed in v3.1. The following example shows how to use the new JournalRedisStorage class.

import optuna
from optuna.storages import JournalStorage, JournalRedisStorage

def objective(trial):
    …
 
storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)

Dask.distributed Integration

DaskStorage, a new storage backend based on Dask.distributed, is now supported. It lets you run distributed optimization through an API similar to concurrent.futures. DaskStorage can be used with InMemoryStorage, so you don't need to set up a database server. Here's a code example showing how to use DaskStorage:

import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait

def objective(trial):
    ...

with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for i in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")

Setting up a Dask cluster is easy: install dask and distributed, then run the dask scheduler and dask worker commands, as detailed in the Quick Start Guide in the Dask.distributed documentation.

$ pip install optuna dask distributed

$ dask scheduler
INFO - Scheduler at: tcp://192.168.1.8:8686
INFO - Dashboard at:                  :8687
…

$ dask worker tcp://192.168.1.8:8686
$ dask worker tcp://192.168.1.8:8686
$ dask worker tcp://192.168.1.8:8686

See the documentation for more information.

Brute-force Sampler

BruteForceSampler, a new sampler for brute-force search, tries all combinations of parameters. In contrast to GridSampler, it does not require passing the search space as an argument and works even with branches. This sampler constructs the search space in the define-by-run style, so it works by just adding sampler=optuna.samplers.BruteForceSampler().

import optuna

def objective(trial):
    c = trial.suggest_categorical("c", ["float", "int"])
    if c == "float":
        return trial.suggest_float("x", 1, 3, step=0.5)
    elif c == "int":
        a = trial.suggest_int("a", 1, 3)
        b = trial.suggest_int("b", a, 3)
        return a + b

study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
study.optimize(objective)

Other Improvements

Bug Fix for TPE’s constant_liar Option

The constant_liar option of TPESampler is intended for distributed or batch optimization. It was introduced in v2.8.0, but suffered from performance degradation in specific situations. In this release, we identified the cause of the problem and resolved it, backed by thorough performance verification. See #4073 for more details.

Make Scipy Dependency Optional

About 50% of the time taken by import optuna was spent in SciPy-related modules. SciPy also occupies 110MB of storage, which is a real problem in resource-limited environments such as serverless computing. We therefore implemented the scientific functions we need ourselves, making the SciPy dependency optional. Thanks to the contributors' efforts on performance optimization, our implementation is as fast as the SciPy-based code even though it is written in pure Python. See #4105 for more information. Note that QMCSampler still depends on SciPy; if you use QMCSampler, please specify SciPy explicitly as a dependency.
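The import-time saving can be checked with a small stdlib-only helper (timed_import is a hypothetical name; json is used below only so the snippet runs anywhere, but the same call on "optuna" before and after v3.1 shows the SciPy savings):

```python
import importlib
import sys
import time

def timed_import(name: str) -> float:
    """Return the wall-clock time (in seconds) of a cold import of `name`."""
    sys.modules.pop(name, None)  # drop any cached module to force a cold import
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

print(f"json: {timed_import('json') * 1000:.2f} ms")
```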

The New UI for Optuna Dashboard

[Screenshot: the new UI for Optuna Dashboard]

We are developing a new UI for Optuna Dashboard, available as an opt-in feature from the beta release: simply launch the dashboard as usual and click the link to the new UI. Please try it out and share your thoughts with us.

$ pip install "optuna-dashboard>=0.9.0b2"

Feedback Survey: The New UI for Optuna Dashboard

Change Supported Python Versions

We have changed the supported Python versions. Specifically, Python 3.6 has been removed from the supported versions and Python 3.11 has been added. See #3021 and #3964 for more details.

Breaking Changes

  • Allow users to call study.optimize() in multiple threads (#4068)
  • Use all trials in TPESampler even when multivariate=True (#4079)
  • Drop Python 3.6 (#4150)
  • Remove RedisStorage (#4156)
  • Deprecate set_system_attr in Study and Trial (#4188)
  • Add a directions arg to storage.create_new_study (#4189)
  • Deprecate system_attrs in Study class (#4250)
  • Deprecate Trial.system_attrs property method (#4264)
  • Remove device argument of TorchDistributedTrial (#4266)

New Features

  • Add Dask integration (#2023, thanks @jrbourbeau!)
  • Add journal-style log storage (#3854)
  • Support CMA-ES with margin in CmaEsSampler (#4016)
  • Add journal redis storage (#4086)
  • Add device argument to BoTorchSampler (#4101)
  • Add the feature to JournalStorage of Redis backend to resume from a snapshot (#4102)
  • TorchDistributedTrial uses group as parameter instead of device (#4106, thanks @reyoung!)
  • Added user_attrs to print by Optuna studies in cli.py (#4129, thanks @gonzaload!)
  • Add BruteForceSampler (#4132, thanks @semiexp!)
  • Add __getstate__ and __setstate__ to RedisStorage (#4135, thanks @shu65!)
  • Make journal redis storage picklable (#4139, thanks @shu65!)
  • Support for qNoisyExpectedHypervolumeImprovement acquisition function from BoTorch (Issue#4014) (#4186)
  • Show best trial number and value in progress bar (#4205)

Enhancements

  • Change the log message format for failed trials (#3857, thanks @erentknn!)
  • Move default logic of get_trial_id_from_study_id_trial_number() method to BaseStorage (#3910)
  • Fix the data migration script for v3 release (#4020)
  • Convert search_space values of GridSampler explicitly (#4062)
  • Add single exception catch to study optimize (#4098)
  • Remove scipy dependencies from TPESampler (#4105)
  • Add validation in enqueue_trial (#4126)
  • Speed up tests/samplers_tests/test_nsgaii.py::test_fast_non_dominated_sort_with_constraints (#4128, thanks @mist714!)
  • Add getstate and setstate to journal storage (#4130, thanks @shu65!)
  • Support None in slice plot (#4133, thanks @belldandyxtq!)
  • Add marker to matplotlib plot_intermediate_value (#4134, thanks @belldandyxtq!)
  • Add overloads for type narrowing in suggest_categorical (#4143, thanks @ConnorBaker!)
  • Cache study.directions to reduce the number of get_study_directions() calls (#4146)
  • Add an in-memory cache in Trial class (#4240)
  • Use CMAwM class even when there is no discrete params (#4289)
  • Refer OPTUNA_STORAGE environment variable in Optuna CLI (#4299, thanks @Hakuyume!)
  • Apply @overload to ChainerMNTrial and TorchDistributedTrial (Follow-up of #4143) (#4300)
  • Make OPTUNA_STORAGE environment variable experimental (#4316)

Bug Fixes

  • Fix infinite loop bug in TPESampler (#3953, thanks @gasin!)
  • Fix GridSampler (#3957)
  • Fix an import error of sqlalchemy.orm.declarative_base (#3967)
  • Skip to add intermediate_value_type and value_type columns if exists (#4015)
  • Fix duplicated sampling of SkoptSampler (#4023)
  • Avoid parse errors of datetime.isoformat strings (#4025)
  • Fix a concurrency bug of JournalStorage set_trial_state_values (#4033)
  • Specify object type to numpy array init to avoid unintended str cast (#4035)
  • Make TPESampler reproducible (#4056)
  • Fix bugs in constant_liar option (#4073)
  • Add a flush to JournalFileStorage.append_logs (#4076)
  • Add a lock to MLflowCallback (#4097)
  • Reject deprecated distributions in OptunaSearchCV (#4120)
  • Stop using hash function in _get_bracket_id in HyperbandPruner (#4131, thanks @zaburo-ch!)
  • Validation for the parameter enqueued in to_internal_repr of FloatDistribution and IntDistribution (#4137)
  • Fix PartialFixedSampler to handle None correctly (#4147, thanks @halucinor!)
  • Fix the bug of JournalFileStorage on Windows (#4151)
  • Fix CmaEs system attribution key (#4184)
  • Skip constraint check for running trial (#4275)
  • Fix constrained optimization with TPESampler's constant_liar (#4325)
  • Fix import of ProcessGroup from torch.distributed (#4344)

Installation

  • Replace thop with fvcore (#3906)
  • Use the latest stable scipy (#3959, thanks @gasin!)
  • Remove GPyTorch version constraint (#3986)
  • Make typing_extensions optional (#3990)
  • Add version constraint on importlib-metadata (#4036)
  • Add a version constraint of matplotlib (#4044)

Documentation

  • Update cli tutorial (#3902)
  • Replace thop with fvcore (#3906)
  • Slightly improve docs of FrozenTrial (#3943)
  • Refine docs in BaseStorage (#3948)
  • Remove "Edit on GitHub" button from readthedocs (#3952)
  • Mention restoring sampler in saving/resuming tutorial (#3992)
  • Use log_loss instead of deprecated log since sklearn 1.1 (#3993)
  • Fix script path in benchmarks/README.md (#4021)
  • Ignore ConvergenceWarning in the ask-and-tell tutorial (#4032)
  • Update docs to let users know the concurrency problem on SQLite3 (#4034)
  • Fix the time complexity of NSGAIISampler (#4045)
  • Fix sampler comparison table (#4082)
  • Add BruteForceSampler in the samplers' list (#4152)
  • Remove markup from NaN in FAQ (#4155)
  • Remove the document of the multi_objective module (#4167)
  • Fix a typo in QMCSampler (#4179)
  • Introduce Optuna Dashboard in tutorial docs (#4226)
  • Remove RedisStorage from docstring (#4232)
  • Add the BruteForceSampler example to the document (#4244)
  • Improve the document of BruteForceSampler (#4245)
  • Fix an inline markup in distributed tutorial (#4247)
  • Fix a typo in BruteForceSampler (#4267)
  • Update FAQ (#4269)
  • Fix a typo in XGBoostPruningCallback (#4270)
  • Fix CMAEvolutionStrategy link in integration.PyCmaSampler document (#4284, thanks @hrntsm!)
  • Resolve warnings by sphinx with nitpicky option and fix typos (#4287)
  • Fix typos (#4291)
  • Improve the document of JournalStorage (#4308, thanks @hrntsm!)
  • Fix typo (#4332, thanks @Jasha10!)
  • Fix docstring in optuna/integration/dask.py (#4333)
  • Mention suggest_float in BruteForceSampler (#4334)
  • Remove verbose_eval argument from lightgbm callback in tutorial pages (#4335)
  • Use Sphinx 5 until sphinx_rtd_theme supports Sphinx 6 (#4341)

Examples

Tests

  • Suppress warnings in tests/test_distributions.py (#3912)
  • Suppress warnings and minor code fixes in tests/trial_tests (#3914)
  • Reduce warning messages by tests/study_tests/ (#3915)
  • Remove dynamic search space based objective from a parallel job test (#3916)
  • Remove all warning messages from tests/integration_tests/test_sklearn.py (#3922)
  • Remove out-of-range related warning messages from MLflowCallback and WeightsAndBiasesCallback (#3923)
  • Ignore RuntimeWarning when nanmin and nanmax take an array only containing nan values from pruners_tests (#3924)
  • Remove warning messages from test files for pytorch_distributed and chainermn modules (#3927)
  • Remove warning messages from tests/integration_tests/test_lightgbm.py (#3944)
  • Resolve warnings in tests/visualization_tests/test_contour.py (#3954)
  • Reduced warning messages from tests/visualization_tests/test_slice.py (#3970, thanks @jmsykes83!)
  • Remove warning from a few visualization tests (#3989)
  • Deselect integration tests in Tests CI (#4013)
  • Remove warnings from tests/visualization_tests/test_optimization_history.py (#4024)
  • Unset PYTHONHASHSEED for the hash-dependent test (#4031)
  • Call study.tell from another process (#4039, thanks @Abelarm!)
  • Improve test for heartbeat: Add test for the case that trial state should be kept running (#4055)
  • Remove warnings in the test of Pareto front (#4072)
  • Remove matplotlib's get_cmap warning from tests/visualization_tests/test_param_importances.py (#4095)
  • Reduce tests' n_trials for CI time reduction (#4117)
  • Skip test_pop_waiting_trial_thread_safe on RedisStorage (#4119)
  • Simplify the test of BruteForceSampler for infinite search space (#4153)
  • Add sep-CMA-ES in parametrize_sampler (#4154)
  • Fix a broken test for dask.distributed integration (#4170)
  • Add DaskStorage to existing storage tests (#4176, thanks @jrbourbeau!)
  • Fix a test error in test_catboost.py (#4190)
  • Remove test/integration_tests/test_sampler.py (#4204)
  • Fix mypy error about PyTorch Lightning in Checks (integration) (#4279)
  • Remove OPTUNA_STORAGE environment variable to check missing storage errors (#4306)
  • Use Trial not FrozenTrial in a test of WeightsAndBiasesCallback (#4309)
  • Activate a test case for _set_alembic_revision (#4319)
  • Add test to check error_score is stored (#4337)

Code Fixes

  • Refactor _tell.py (#3841)
  • Make log message user-friendly when objective returns a sequence of unsupported values (#3868)
  • Gather mask of None parameter in TPESampler (#3886)
  • Migrate CLI from cliff to argparse (#4100)
  • Enable mypy --no-implicit-reexport option (#4110)
  • Remove unused function: find_any_distribution (#4127)
  • Remove object inheritance from base classes (#4161)
  • Use mlflow 2.0.1 syntax (#4173)
  • Simplify implementation of _preprocess_argv in CLI (#4187)
  • Move _solve_hssp to _hypervolume/utils.py (#4227, thanks @jpbianchi!)
  • Add tests for CmaEsSampler (#4233)
  • Remove an obsoleted logic from CmaEsSampler (#4239)
  • Avoid to decode log string in JournalRedisStorage (#4246)
  • Fix a typo in TorchDistributedTrial (#4271)
  • Fix mypy error about Chainer in Checks (integration) (#4276)
  • Fix mypy error about BoTorch in Checks (integration) (#4278)
  • Fix mypy error about dask.py in Checks (integration) (#4280)
  • Avoid to use features that will be removed in SQLAlchemy v2.0 (#4304)

Continuous Integration

  • Hotfix botorch module by adding the version constraint of gpytorch (#3950)
  • Drop Python 3.6 from integration CIs (#3983)
  • Use PyTorch 1.11 for consistency and fix a typo (#3987)
  • Support Python 3.11 (#4018)
  • Remove # type: ignore for mypy 0.981 (#4019)
  • Fix metric inconsistency between bayesmark plots and report (#4077)
  • Pin Ubuntu version to 20.04 in Tests and Tests (Storage with server) (#4118)
  • Add workflow to test Optuna with lower versions of constraints (#4125)
  • Mark some tests slow and ignore in pull request trigger (#4138, thanks @mist714!)
  • Allow display names to be changed in benchmark scripts (Issue #4017) (#4145)
  • Disable scheduled workflow runs in forks (#4159)
  • Remove the CircleCI job document (#4160)
  • Stop running reproducibility tests on CI for PR (#4162)
  • Stop running reproducibility tests for coverage (#4163)
  • Add workflow_dispatch trigger to the integration tests (#4166)
  • Fix CI errors when using mlflow==2.0.1 (#4171)
  • Add fakeredis in benchmark dependencies (#4177)
  • Fix asv speed benchmark (#4185)
  • Skip tests with minimum version for Python 3.10 and 3.11 (#4199)
  • Split normal tests and tests with minimum versions (#4200)
  • Update action/checkout@v2 -> v3 (#4206)
  • Update actions/cache@v2 -> v3 (#4207)
  • Update actions/stale@v5 -> v6 (#4208)
  • Pin botorch to avoid CI failure (#4228)
  • Add the pytest dependency for asv (#4243)
  • Fix mypy error about pytorch_distributed.py in Checks (integration) (#4281)
  • Run test_pytorch_distributed.py again (#4301)
  • Remove all CircleCI config (#4315)
  • Update minimum version of cmaes (#4321)
  • Add Python 3.11 to integration CI (#4327)

Other

  • Bump up version number to 3.1.0.dev (#3934)
  • Remove the news section on README (#3940)
  • Add issue template for code fix (#3968)
  • Close stale issues immediately after labeling stale (#4071)
  • Remove tox.ini (#4078)
  • Replace gitter with GitHub Discussions (#4083)
  • Deprecate description-checked label (#4090)
  • Make days-before-issue-stale 300 days (#4091)
  • Unnecessary space removed (#4109, thanks @gonzaload!)
  • Add note not to share pickle files in bug reports (#4212)
  • Update the description of optuna-dashboard on README (#4217)
  • Remove optuna.TYPE_CHECKING (#4238)
  • Bump up version to v3.1.0-b0 (#4262)
  • Remove the list of examples from examples/README (#4283)
  • Exclude benchmark directories from the sdist package (#4318)
  • Bump up version number to 3.1.0 (#4346)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@Abelarm, @Alnusjaponica, @ConnorBaker, @Hakuyume, @HideakiImamura, @Jasha10, @amylase, @belldandyxtq, @c-bata, @contramundum53, @cross32768, @erentknn, @eukaryo, @g-votte, @gasin, @gen740, @gonzaload, @halucinor, @himkt, @hrntsm, @hvy, @jmsykes83, @jpbianchi, @jrbourbeau, @keisuke-umezawa, @knshnb, @mist714, @ncclementi, @not522, @nzw0301, @rene-rex, @reyoung, @semiexp, @shu65, @sile, @toshihikoyanase, @wattlebirdaz, @xadrianzetx, @zaburo-ch

v3.1.0-b0

1 year ago

This is the release note of v3.1.0-b0.

Highlights

CMA-ES with Margin support

[Animation: CMA-ES (left) vs. CMA-ES with Margin (right)]

The animation is taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.

CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our CmaEsSampler, which makes it more efficient in these cases. You can see the benchmark results here. For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - arXiv”, which has been accepted for presentation at GECCO 2022.

import optuna
from optuna.samplers import CmaEsSampler

def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y
 
study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=100)

Distributed Optimization via NFS

JournalFileStorage, a file storage backend based on JournalStorage, supports NFS (Network File System) environments. It is the easiest option for users who wish to execute distributed optimization in environments where it is difficult to set up database servers such as MySQL, PostgreSQL or Redis (e.g. #815, #1330, #1457 and #2216).

import optuna
from optuna.storages import JournalStorage, JournalFileStorage

def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y
 
storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)

For more information on JournalFileStorage, see the blog post “Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage” written by @wattlebirdaz.

Dask Integration

DaskStorage, a new storage backend based on Dask.distributed, is supported. It enables distributed computing through an API similar to concurrent.futures. An example is shown below (the full example code is available in the optuna-examples repository).

import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait

def objective(trial):
    ...

with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for i in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")

One interesting aspect is the availability of InMemoryStorage: you don't need to set up a database server for distributed optimization. You still need to set up a Dask.distributed cluster, but that is quite easy, as shown below. See the Quickstart of the Dask.distributed documentation for more details.

$ pip install optuna dask distributed

$ dask-scheduler
INFO - Scheduler at: tcp://192.168.1.8:8686
INFO - Dashboard at:                  :8687
…

$ dask-worker tcp://192.168.1.8:8686
$ dask-worker tcp://192.168.1.8:8686
$ dask-worker tcp://192.168.1.8:8686

$ python dask_simple.py

A brand-new Redis storage

We have replaced the Redis storage backend with a JournalStorage-based one. The experimental RedisStorage class has been removed in v3.1. The following example shows how to use the new JournalRedisStorage class.

import optuna
from optuna.storages import JournalStorage, JournalRedisStorage

def objective(trial):
    …
 
storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)

Brute-force Sampler

BruteForceSampler, a new sampler for brute-force search, tries all combinations of parameters. In contrast to GridSampler, it does not require passing the search space as an argument and works even with branches. This sampler constructs the search space in the define-by-run style, so it works by just adding sampler=optuna.samplers.BruteForceSampler().

import optuna

def objective(trial):
    c = trial.suggest_categorical("c", ["float", "int"])
    if c == "float":
        return trial.suggest_float("x", 1, 3, step=0.5)
    elif c == "int":
        a = trial.suggest_int("a", 1, 3)
        b = trial.suggest_int("b", a, 3)
        return a + b

study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
study.optimize(objective)

Breaking Changes

  • Allow users to call study.optimize() in multiple threads (#4068)
  • Use all trials in TPESampler even when multivariate=True (#4079)
  • Drop Python 3.6 (#4150)
  • Remove RedisStorage (#4156)
  • Deprecate set_system_attr in Study and Trial (#4188)
  • Deprecate system_attrs in Study class (#4250)

New Features

  • Add Dask integration (#2023, thanks @jrbourbeau!)
  • Add journal-style log storage (#3854)
  • Support CMA-ES with margin in CmaEsSampler (#4016)
  • Add journal redis storage (#4086)
  • Add device argument to BoTorchSampler (#4101)
  • Add the feature to JournalStorage of Redis backend to resume from a snapshot (#4102)
  • Added user_attrs to print by optuna studies in cli.py (#4129, thanks @gonzaload!)
  • Add BruteForceSampler (#4132, thanks @semiexp!)
  • Add __getstate__ and __setstate__ to RedisStorage (#4135, thanks @shu65!)
  • Support pickle in JournalRedisStorage (#4139, thanks @shu65!)
  • Support for qNoisyExpectedHypervolumeImprovement acquisition function from BoTorch (Issue#4014) (#4186)

Enhancements

  • Change the log message format for failed trials (#3857, thanks @erentknn!)
  • Move default logic of get_trial_id_from_study_id_trial_number() method to BaseStorage (#3910)
  • Fix the data migration script for v3 release (#4020)
  • Convert search_space values of GridSampler explicitly (#4062)
  • Add single exception catch to study optimize (#4098)
  • Add validation in enqueue_trial (#4126)
  • Speed up tests/samplers_tests/test_nsgaii.py::test_fast_non_dominated_sort_with_constraints (#4128, thanks @mist714!)
  • Add getstate and setstate to journal storage (#4130, thanks @shu65!)
  • Support None in slice plot (#4133, thanks @belldandyxtq!)
  • Add marker to matplotlib plot_intermediate_value (#4134, thanks @belldandyxtq!)
  • Cache study.directions to reduce the number of get_study_directions() calls (#4146)
  • Add an in-memory cache in Trial class (#4240)

Bug Fixes

  • Fix infinite loop bug in TPESampler (#3953, thanks @gasin!)
  • Fix GridSampler (#3957)
  • Fix an import error of sqlalchemy.orm.declarative_base (#3967)
  • Skip to add intermediate_value_type and value_type columns if exists (#4015)
  • Fix duplicated sampling of SkoptSampler (#4023)
  • Avoid parse errors of datetime.isoformat strings (#4025)
  • Fix a concurrency bug of JournalStorage set_trial_state_values (#4033)
  • Specify object type to numpy array init to avoid unintended str cast (#4035)
  • Make TPESampler reproducible (#4056)
  • Fix bugs in constant_liar option (#4073)
  • Add a flush to JournalFileStorage.append_logs (#4076)
  • Add a lock to MLflowCallback (#4097)
  • Reject deprecated distributions in OptunaSearchCV (#4120)
  • Stop using hash function in _get_bracket_id in HyperbandPruner (#4131, thanks @zaburo-ch!)
  • Validate enqueued parameters in to_internal_repr of FloatDistribution and IntDistribution (#4137)
  • Fix PartialFixedSampler to handle None correctly (#4147, thanks @halucinor!)
  • Fix the bug of JournalFileStorage on Windows (#4151)
  • Fix CmaEs system attribution key (#4184)
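
The reproducibility fix for TPESampler (#4056) follows a common pattern: a sampler owns its own seeded random generator, so two runs with the same seed produce identical suggestions. The class and method names below are hypothetical, not Optuna's actual API:

```python
import random

# Illustrative sketch of the seeded-sampler reproducibility pattern.
# Class and method names are hypothetical, not Optuna's API.
class SeededSampler:
    def __init__(self, seed=None):
        # Each sampler instance owns a private RNG seeded once at construction.
        self._rng = random.Random(seed)

    def suggest_float(self, low, high):
        return self._rng.uniform(low, high)

s1 = SeededSampler(seed=42)
s2 = SeededSampler(seed=42)
run_a = [s1.suggest_float(0.0, 1.0) for _ in range(3)]
run_b = [s2.suggest_float(0.0, 1.0) for _ in range(3)]
# Identical seeds yield identical suggestion sequences.
```

Keeping the RNG on the instance (rather than using the global random state) is what makes runs independent of other code that consumes random numbers.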

Installation

  • Replace thop with fvcore (#3906)
  • Use the latest stable scipy (#3959, thanks @gasin!)
  • Remove GPyTorch version constraint (#3986)
  • Make typing_extensions optional (#3990)
  • Add version constraint on importlib-metadata (#4036)
  • Add a version constraint of matplotlib (#4044)

Documentation

  • Update cli tutorial (#3902)
  • Replace thop with fvcore (#3906)
  • Slightly improve docs of FrozenTrial (#3943)
  • Refine docs in BaseStorage (#3948)
  • Remove "Edit on GitHub" button from readthedocs (#3952)
  • Mention restoring sampler in saving/resuming tutorial (#3992)
  • Use log_loss instead of deprecated log since sklearn 1.1 (#3993)
  • Fix script path in benchmarks/README.md (#4021)
  • Ignore ConvergenceWarning in the ask-and-tell tutorial (#4032)
  • Update docs to let users know the concurrency problem on SQLite3 (#4034)
  • Fix the time complexity of NSGAIISampler (#4045)
  • Fix sampler comparison table (#4082)
  • Add BruteForceSampler in the samplers' list (#4152)
  • Remove markup from NaN in FAQ (#4155)
  • Remove the document of the multi_objective module (#4167)
  • Fix a typo in QMCSampler (#4179)
  • Introduce Optuna Dashboard in tutorial docs (#4226)
  • Remove RedisStorage from docstring (#4232)
  • Add the BruteForceSampler example to the document (#4244)
  • Improve the document of BruteForceSampler (#4245)
  • Fix an inline markup in distributed tutorial (#4247)

Tests

  • Suppress warnings in tests/test_distributions.py (#3912)
  • Suppress warnings and minor code fixes in tests/trial_tests (#3914)
  • Reduce warning messages by tests/study_tests/ (#3915)
  • Remove dynamic search space based objective from a parallel job test (#3916)
  • Remove all warning messages from tests/integration_tests/test_sklearn.py (#3922)
  • Remove out-of-range related warning messages from MLflowCallback and WeightsAndBiasesCallback (#3923)
  • Ignore RuntimeWarning when nanmin and nanmax take an array only containing nan values from pruners_tests (#3924)
  • Remove warning messages from test files for pytorch_distributed and chainermn modules (#3927)
  • Remove warning messages from tests/integration_tests/test_lightgbm.py (#3944)
  • Resolve warnings in tests/visualization_tests/test_contour.py (#3954)
  • Reduced warning messages from tests/visualization_tests/test_slice.py (#3970, thanks @jmsykes83!)
  • Remove warning from a few visualization tests (#3989)
  • Deselect integration tests in Tests CI (#4013)
  • Remove warnings from tests/visualization_tests/test_optimization_history.py (#4024)
  • Unset PYTHONHASHSEED for the hash-dependent test (#4031)
  • Test: calling study.tell from another process (#4039, thanks @Abelarm!)
  • Improve test for heartbeat: Add test for the case that trial state should be kept running (#4055)
  • Remove warnings in the test of Pareto front (#4072)
  • Remove matplotlib get_cmap warning from tests/visualization_tests/test_param_importances.py (#4095)
  • Reduce tests' n_trials for CI time reduction (#4117)
  • Skip test_pop_waiting_trial_thread_safe on RedisStorage (#4119)
  • Simplify the test of BruteForceSampler for infinite search space (#4153)
  • Add sep-CMA-ES in parametrize_sampler (#4154)
  • Fix a broken test for dask.distributed integration (#4170)
  • Add DaskStorage to existing storage tests (#4176, thanks @jrbourbeau!)
  • Fix a test error in test_catboost.py (#4190)
  • Remove test/integration_tests/test_sampler.py (#4204)

Code Fixes

  • Refactor _tell.py (#3841)
  • Make log message user-friendly when objective returns a sequence of unsupported values (#3868)
  • Gather mask of None parameter in TPESampler (#3886)
  • Update cli tutorial (#3902)
  • Migrate CLI from cliff to argparse (#4100)
  • Enable mypy --no-implicit-reexport option (#4110)
  • Remove unused function: find_any_distribution (#4127)
  • Remove object inheritance from base classes (#4161)
  • Use mlflow 2.0.1 syntax (#4173)
  • Simplify implementation of _preprocess_argv in CLI (#4187)
  • Move _solve_hssp to _hypervolume/utils.py (#4227, thanks @jpbianchi!)
  • Avoid decoding log strings in JournalRedisStorage (#4246)
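
The CLI migration from cliff to argparse (#4100) moves toward the standard-library subcommand pattern. A minimal sketch of that pattern follows; the command and option names are illustrative, not Optuna's actual CLI:

```python
import argparse

# Minimal sketch of the argparse subcommand pattern that replaces cliff.
# Command and option names here are illustrative, not Optuna's real CLI.
parser = argparse.ArgumentParser(prog="optuna-like")
subparsers = parser.add_subparsers(dest="command", required=True)

# Each subcommand gets its own sub-parser with its own options.
create = subparsers.add_parser("create-study", help="create a new study")
create.add_argument("--study-name", default=None)

args = parser.parse_args(["create-study", "--study-name", "demo"])
```

Using argparse drops the external cliff dependency while keeping the familiar `prog subcommand --option` interface.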

Continuous Integration

  • Hotfix botorch module by adding the version constraint of gpytorch (#3950)
  • Drop python 3.6 from integration CIs (#3983)
  • Use PyTorch 1.11 for consistency and fix a typo (#3987)
  • Support Python 3.11 (#4018)
  • Remove # type: ignore for mypy 0.981 (#4019)
  • Fix metric inconsistency between bayesmark plots and report (#4077)
  • Pin Ubuntu version to 20.04 in Tests and Tests (Storage with server) (#4118)
  • Add workflow to test Optuna with lower versions of constraints (#4125)
  • Mark some tests slow and ignore in pull request trigger (#4138, thanks @mist714!)
  • Allow display names to be changed in benchmark scripts (Issue #4017) (#4145)
  • Disable scheduled workflow runs in forks (#4159)
  • Remove the CircleCI job document (#4160)
  • Stop running reproducibility tests on CI for PR (#4162)
  • Stop running reproducibility tests for coverage (#4163)
  • Add workflow_dispatch trigger to the integration tests (#4166)
  • [hotfix] Fix CI errors when using mlflow==2.0.1 (#4171)
  • Add fakeredis in benchmark deps (#4177)
  • Fix asv speed benchmark (#4185)
  • Skip tests with minimum version for Python 3.10 and 3.11 (#4199)
  • Split normal tests and tests with minimum versions (#4200)
  • Update action/checkout@v2 -> v3 (#4206)
  • Update actions/stale@v5 -> v6 (#4208)
  • Pin botorch to avoid CI failure (#4228)
  • Add the pytest dependency for asv (#4243)

Other

  • Bump up version number to 3.1.0.dev (#3934)
  • Remove the news section on README (#3940)
  • Add issue template for code fix (#3968)
  • Close stale issues immediately after labeling stale (#4071)
  • Remove tox.ini (#4078)
  • Replace gitter with GitHub Discussions (#4083)
  • Deprecate description-checked label (#4090)
  • Make days-before-issue-stale 300 days (#4091)
  • Unnecessary space removed (#4109, thanks @gonzaload!)
  • Add note not to share pickle files in bug reports (#4212)
  • Update the description of optuna-dashboard on README (#4217)
  • Remove optuna.TYPE_CHECKING (#4238)
  • Bump up version to v3.1.0-b0 (#4262)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@Abelarm, @Alnusjaponica, @HideakiImamura, @amylase, @belldandyxtq, @c-bata, @contramundum53, @cross32768, @erentknn, @eukaryo, @g-votte, @gasin, @gen740, @gonzaload, @halucinor, @himkt, @hvy, @jmsykes83, @jpbianchi, @jrbourbeau, @keisuke-umezawa, @knshnb, @mist714, @ncclementi, @not522, @nzw0301, @rene-rex, @semiexp, @shu65, @sile, @toshihikoyanase, @wattlebirdaz, @xadrianzetx, @zaburo-ch