A hyperparameter optimization framework
This is the release note of v3.6.1.
- `average_is_best` implementation in `WilcoxonPruner` (#5373)

This release was made possible by the authors and the people who participated in the reviews and discussions.
@HideakiImamura, @eukaryo, @nabenabe0928
This is the release note of v3.6.0.
Optuna 3.6 newly supports the following new features. See our release blog for more detailed information.

- `optuna.terminator` using `optuna._gp` (#5241)

These migration-related PRs do not break the backward compatibility as long as optuna-integration v3.6.0 or later is installed in your environment.

- `optuna-integration` (#5161, thanks @dheemantha-bhat!)
- `sklearn` integration (#5225)
- `SkoptSampler` (#5234)
- `cma` integration (#5236)
- `wandb` integration (#5237)
- `sklearn` integration (https://github.com/optuna/optuna-integration/pull/66)
- `SkoptSampler` (https://github.com/optuna/optuna-integration/pull/74)
- `pycma` integration (https://github.com/optuna/optuna-integration/pull/77)
- `MLflow` integration (https://github.com/optuna/optuna-integration/pull/84)
- `GPSampler` (#5185)
- `formats.sh` based on `optuna/master` (https://github.com/optuna/optuna-integration/pull/75)
- `TypeError` if `params` is not a `dict` in `enqueue_trial` (#5164, thanks @adjeiv!)
- `FrozenTrial._validate()` (#5211)
- `optuna._gp` (#5224)
- `GPSampler` (#5274)
- `GPSampler` performance other than introducing local search (#5279)
- `README.md` (https://github.com/optuna/optuna-integration/pull/88)
- `LightGBMTuner` test (https://github.com/optuna/optuna-integration/pull/89)
- `JSONDecodeError` in `JournalStorage` (#5195)
- `gp.fit_kernel_params` more robust (#5247)
- `study.tell` (#5269, thanks @ryota717!)
- `_split_trials` of `TPESampler` for constrained optimization with constant liar (#5298)
- `study optimize` from CLI tutorial page (#5152)
- `GridSampler` with ask-and-tell interface (#5153)
- `faq.rst` (#5170)
- `plotly.graph_objs` with `plotly.graph_objects` (#5223)
- `optuna.terminator` module (#5243, thanks @HarshitNagpal29!)
- `lightgbm` dependency in visualization tutorial (#5257)
- `Specify Hyperparameters Manually` tutorial page (#5258)
- `n_trials>10000` (#5310)
- `PedAnovaImportanceEvaluator` (#5312)
- `WilcoxonPruner` (#5313)
- `WilcoxonPruner` (#5315)
- `-pre` option in the `rl` integration (https://github.com/optuna/optuna-examples/pull/243)
- `dask` and `tensorflow` (https://github.com/optuna/optuna-examples/pull/245)
- `_create_frozen_trial()` under `testing` module (#5157)
- `__init__.py` and fix its documentation generation (https://github.com/optuna/optuna-integration/pull/71)
- `optuna.integration` with `optuna_integration` in the doc and the issue template (https://github.com/optuna/optuna-integration/pull/73)
- `__init__.py` (https://github.com/optuna/optuna-integration/pull/86)
- `KerasPruningCallback` (https://github.com/optuna/optuna-integration/pull/93)
- `UserWarning` by `tests/test_keras.py` (https://github.com/optuna/optuna-integration/pull/94)
- `TPESampler` for more clarity before c-TPE integration (#5117)
- `Checks(integration)` failure (#5167)
- `_ParzenEstimatorParameters` to more modern style (#5193)
- `optuna/study/_optimize.py` (#5261, thanks @shahpratham!)
- `plot_timeline` test (#5281)
- `black 24.*` (https://github.com/optuna/optuna-integration/pull/64)
- `botorch<0.10.` for CI failures (https://github.com/optuna/optuna-integration/pull/96)
- `Checks (Integration)` CI (#5217)
- `test_reproducible_in_other_process` for `GPSampler` with Python 3.12 (#5251)
- `fakeredis` (#5307)
- `labeler.yml` to disable the `triage` action (#5240)

This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @DanielAvdar, @HarshitNagpal29, @HideakiImamura, @SimonPop, @adjeiv, @buruzaemon, @c-bata, @contramundum53, @dheemantha-bhat, @eukaryo, @gen740, @hrntsm, @knshnb, @nabenabe0928, @not522, @nzw0301, @porink0424, @ryota717, @shahpratham, @toshihikoyanase, @y0z
This is the release note of v3.5.0.
This is a maintenance release with various bug fixes and improvements to the documentation and more.
- `n_objectives` condition to be greater than 4 in candidates functions (#5121, thanks @adjeiv!)
- `constant_liar` in multi-objective `TPESampler` (#5021)
- `optuna study-names` cli (#5029)
- `ExpectedHypervolumeImprovement` candidates function for `BotorchSampler` (#5065, thanks @adjeiv!)
- `botorch.py` (#5094, thanks @sousu4!)
- `OptunaSearchCV` (#5098, thanks @adjeiv!)
- `constant_liar` in multi-objective `TPESampler` (#5021)
- `plot_contour` (#5107)
- `NSGAIIChildGenerationStrategy` (#5003)
- `trials` for above in MO split when `n_below=0` (#5079)
- `logpdf` for scaled truncnorm (#5110)
- `LightGBM` tuner and separate `train()` from `__init__.py` (#5010)
- `HyperbandPruner` (#5075, thanks @felix-cw!)
- `MOTPESampler` from `index.rst` file (#5084, thanks @Ashhar-24!)
- `MOTPESampler` to the doc (#5086)
- `README.md` to fix the installation and integration (#5126)
- `Recommended budgets` include `n_startup_trials` (#5137)
- `jax` and `jaxlib` (https://github.com/optuna/optuna-examples/pull/223)
- `optuna/optuna-dashboard` (https://github.com/optuna/optuna-examples/pull/224)
- `OptunaSearchCV` with terminator (https://github.com/optuna/optuna-examples/pull/225)
- `tests/study_tests/test_study.py` (#5070, thanks @sousu4!)
- `PyTorchLightning` (#5028)
- `Any` with `float` in `_TreeNode.children` (#5040, thanks @aanghelidi!)
- `typing.py` (#5054, thanks @jot-s-bindra!)
- `tests/storages_tests/test_heartbeat.py` (#5066, thanks @sousu4!)
- `frozen.py` (#5080, thanks @Vaibhav101203!)
- `dataframe.py` (#5081, thanks @Vaibhav101203!)
- `test_tensorflow` in Python 3.11 (https://github.com/optuna/optuna-integration/pull/46)
- `type: ignore` (#5047)
- `tests-mpi` to the oldest and latest Python versions (#5067)
- `tests-mpi` (#5100)
- `should-skip` to `test-trigger-type` for more clarity (#5134)
- Pin the version of `PyQt6-Qt6` (#5140)
- `README.md` (#5108)
- `!examples` from `.dockerignore` (#5129)

This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @Ashhar-24, @Guillaume227, @HideakiImamura, @JustinGoheen, @Vaibhav101203, @aanghelidi, @adjeiv, @c-bata, @contramundum53, @eukaryo, @felix-cw, @gen740, @jot-s-bindra, @keisuke-umezawa, @knshnb, @nabenabe0928, @not522, @nzw0301, @p1kit, @sousu4, @toshihikoyanase, @y-kamiya
This is the release note of v3.4.0.
Optuna 3.4 newly supports the following new features. See our release blog for more detailed information.

- `LightGBM>=4.0` (#4844)
- `SkoptSampler` (#4913)
- `get_all_study_names()` (#4898)
- `plot_rank` (#4899, thanks @ryota717!)
- `TPESampler` (#4926)
- `metric_names` getter to study (#4930)
- `GCSArtifactStore` (#4967, thanks @semiexp!)
- `BestValueStagnationEvaluator` (#4974, thanks @smygw72!)
- `_parallel_coordinate.py` when log scale (#4911)
- `fail_stale_trials` with race condition (#4886)
- `RandomSampler` (#4970, thanks @shu65!)
- `min_child_samples` (#5007)
- `BruteForceSampler` in parallel optimization (#5022)
- `_filesystem.py` (#4909)
- `optuna-fast-fanova` in documents (#4943)
- `Boto3ArtifactStore`'s docstring (#4957)
- `JournalStorage` (#4980, thanks @semiexp!)
- `ArtifactNotFound` (#4982, thanks @smygw72!)
- `n_trials` in `test_combination_of_different_distributions_objective` (#4950)
- `pytest-xdist` (#4999)
- `isinstance` instead of `if type() is ...` (#4896)
- `cmaes` dependency optional (#4901)
- `before_trial` (#4914)
- `_grid.py` (#4918)
- `checks-integration` errors on LightGBMTuner (#4923)
- `botorch` method to remove warning (#4940)
- `_split_trials` instead of `_get_observation_pairs` and `_split_observation_pairs` (#4947)
- `__future__.annotations` in `optuna/visualization/_optimization_history.py` (#4964, thanks @YuigaWada!)
- `optuna/visualization/_hypervolume_history.py` (#4965, thanks @RuTiO2le!)
- `optuna/_convert_positional_args.py` (#4966, thanks @hamster-86!)
- `SQLAlchemy` (#4968)
- `collections.abc` in `optuna/visualization/_edf.py` (#4969, thanks @g-tamaki!)
- `collections.abc` in plot pareto front (#4971)
- `experimental_func` from `metric_names` property (#4983, thanks @semiexp!)
- `__future__.annotations` to `progress_bar.py` (#4992)
- `optuna/optuna/visualization/matplotlib/_optimization_history.py` (#5015, thanks @sousu4!)
- `asv` 0.6.0 (#4882)
- `tests-mpi` (#4998)
- `README.md` (https://github.com/optuna/optuna-integration/pull/39)
- `FUNDING.yml` (#4912)
- `optional-dependencies` and document deselecting integration tests in `CONTRIBUTING.md` (#4962)

This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @HideakiImamura, @RuTiO2le, @YuigaWada, @adjeiv, @c-bata, @ciffelia, @contramundum53, @cross32768, @eukaryo, @g-tamaki, @g-votte, @gen740, @hamster-86, @hrntsm, @hvy, @keisuke-umezawa, @knshnb, @lucasmrdt, @louis-she, @moririn2528, @nabenabe0928, @not522, @nzw0301, @ryota717, @semiexp, @shu65, @smygw72, @sousu4, @torotoki, @toshihikoyanase, @xadrianzetx
This is the release note of v3.3.0.
A new variant of CMA-ES has been added. By setting the `lr_adapt` argument to `True` in `CmaEsSampler`, you can use it. For multimodal and/or noisy problems, adapting the learning rate can help avoid getting trapped in local optima. For more details, please refer to #4817. We want to thank @nomuramasahir0, one of the authors of LRA-CMA-ES, for his great work and the development of the cmaes library.
In multiobjective optimization, the history of hypervolume is commonly used as an indicator of performance. Optuna now supports this feature in the visualization module. Thanks to @y0z for your great work!
(Figure: the hypervolume history plot rendered with the Plotly and matplotlib backends.)
Some samplers support constrained optimization; however, many other features cannot handle constraints yet. We are continuously enhancing support for constraints. In this release, `plot_optimization_history` starts to consider constraint violations. Thanks to @hrntsm for your great work!
import optuna


def objective(trial):
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)
    v0 = 4 * x**2 + 4 * y**2
    trial.set_user_attr("constraint", [1000 - v0])
    return v0


def constraints_func(trial):
    return trial.user_attrs["constraint"]


sampler = optuna.samplers.TPESampler(constraints_func=constraints_func)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
fig = optuna.visualization.plot_optimization_history(study)
fig.show()
Optuna Dashboard v0.11.0 provides tight integration with the Streamlit framework. Using this feature, you can create your own application for human-in-the-loop optimization. Please check out the documentation and the example for details.
- `ordered_dict` argument from `IntersectionSearchSpace` (#4846)
- `logei_candidate_func` and make it default when available (#4667)
- `JournalFileStorage` and `JournalRedisStorage` on CLI (#4696)
- `cv_results_` to `OptunaSearchCV` (#4751, thanks @jckkvs!)
- `optuna.integration.botorch.qnei_candidates_func` (#4753, thanks @kstoneriv3!)
- `plotly` backend (#4757, thanks @y0z!)
- `FileSystemArtifactStore` (#4763)
- `_optimization_history_plot` (#4793, thanks @hrntsm!)
- `LightGBM` version to v4.0.0 (#4810)
- `matplotlib._optimization_history_plot` (#4816, thanks @hrntsm!)
- `upload_artifact` api (#4823)
- `before_trial` (#4825)
- `Boto3ArtifactStore` (#4840)
- `logpdf` in `_truncnorm.py` (#4712)
- `erf` (#4713)
- `get_all_trials` in `InMemoryStorage` (#4716)
- `BruteForceSampler` consider failed trials (#4747)
- `_get_latest_trial` (#4774)
- `plot_hypervolume_history` (#4776)
- `BruteForceSampler` for pruned trials (#4720)
- `plot_slice` bug when some of the choices are numeric (#4724)
- `LightGBMTuner` reproducible (#4795)
- `jquery-extension` (#4691)
- `plot_rank` and `plot_timeline` plots to visualization tutorial (#4735)
- `integration/sklearn.py` (#4745)
- `study.n_objectives` from document (#4796)
- `sphinx_rtd_theme` (#4853)
- `LICENSE` file (https://github.com/optuna/optuna-examples/pull/200)
- `pytestmark` (https://github.com/optuna/optuna-integration/pull/29)
- `GridSampler` test for failed trials (#4721)
- `OptunaSearchCV` behavior (#4758)
- `test_log_gauss_mass` with SciPy 1.11.0 (#4766)
- `benchmarks` (#4703, thanks @caprest!)
- `TPESampler` (#4717)
- `_get_observation_pairs` (#4742)
- `early_stopping_rounds` (#4752)
- `_fast_non_dominated_sort()` (#4759)
- `after_trial` strategy (#4760)
- `TPESampler` (#4769)
- `pkg_resources` (#4770)
- `_calculate_weights_below_for_multi_objective` (#4773)
- `_study_id` parameter from `Trial` class (#4811, thanks @adjeiv!)
- `OrderedDict` (#4838, thanks @taniokay!)
- `samplers._search_space.IntersectionSearchSpace` (#4857)
- `tests-integration` (#4784)
- `type: ignore`s (#4787)

This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @HideakiImamura, @adjeiv, @c-bata, @caprest, @contramundum53, @cross32768, @eukaryo, @gen740, @hrntsm, @jckkvs, @knshnb, @kstoneriv3, @nomuramasahir0, @not522, @nzw0301, @rishabsinghh, @taniokay, @toshihikoyanase, @wouterzwerink, @xadrianzetx, @y0z
This is the release note of v3.2.0.
With the latest release, we have incorporated support for human-in-the-loop optimization. It enables an interactive optimization process between users and the optimization algorithm. As a result, it opens up new opportunities for the application of Optuna in tuning Generative AI. For further details, please check out our human-in-the-loop optimization tutorial.
Overview of human-in-the-loop optimization. Generated images and sounds are displayed on Optuna Dashboard, and users can directly evaluate them there.
Optuna Terminator is a new feature that quantitatively estimates room for optimization and automatically stops the optimization process. It is designed to alleviate the burden of figuring out an appropriate value for the number of trials (n_trials
), or unnecessarily consuming computational resources by indefinitely running the optimization loop. See #4398 and optuna-examples#190.
Transition of estimated room for improvement. It steadily decreases towards the level of cross-validation errors.
We've introduced the NSGAIIISampler as a new multi-objective optimization sampler. It implements NSGA-III, which is an extended variant of NSGA-II, designed to efficiently optimize even when the dimensionality of the objective values is large (especially when it's four or more). NSGA-II had an issue where the search would become biased towards specific regions when the dimensionality of the objective values exceeded four. In NSGA-III, the algorithm is designed to distribute the points more uniformly. This feature was introduced by #4436.
Objective value space for multi-objective optimization (minimization problem). Red points represent Pareto solutions found by NSGA-II. Blue points represent those found by NSGA-III. NSGA-II shows a tendency for points to concentrate towards each axis (corresponding to the ends of the Pareto Front). On the other hand, NSGA-III displays a wider distribution across the Pareto Front.
Continuing from v3.1, significant improvements have been made to the CMA-ES Sampler. As a new feature, we've added the BI-population CMA-ES algorithm, a kind of restart strategy that mitigates the problem of falling into local optima. Whether the IPOP CMA-ES, which we've been providing so far, or the new BI-population CMA-ES is better depends on the problems. If you're struggling with local optima, please try BI-population CMA-ES as well. For more details, please see #4464.
The timeline plot visualizes the progress (status, start and end times) of each trial. In this plot, the horizontal axis represents time, and trials are plotted in the vertical direction. Each trial is represented as a horizontal bar, drawn from the start to the end of the trial. With this plot, you can quickly get an understanding of the overall progress of the optimization experiment, such as whether parallel optimization is progressing properly or if there are any trials taking an unusually long time.
Similar to other plot functions, all you need to do is pass the study object to plot_timeline
. For more details, please refer to #4470 and #4538.
A new visualization feature, plot_rank
, has been introduced. This plot provides valuable insights into landscapes of objective functions, i.e., relationship between parameters and objective values. In this plot, the vertical and horizontal axes represent the parameter values, and each point represents a single trial. The points are colored according to their ranks.
Similar to other plot functions, all you need to do is pass the study object to plot_rank. For more details, please refer to #4427 and #4541.
We have separated Optuna's integration module into a different package called optuna-integration. Maintaining many integrations within the Optuna package was becoming costly. By separating the integration module, we aim to improve the development speed of both Optuna itself and its integration module. As of the release of v3.2, we have migrated six integration modules: allennlp, catalyst, chainer, keras, skorch, and tensorflow (except for the TensorBoard integration). To use these integration modules, `pip install optuna-integration` is necessary. See #4484.
- `chainermn` integration (https://github.com/optuna/optuna-integration/pull/1)
- `integration/keras.py` (https://github.com/optuna/optuna-integration/pull/5)
- `integration/allennlp` (https://github.com/optuna/optuna-integration/pull/8)
- `tf.keras` integration (https://github.com/optuna/optuna-integration/pull/21)
- `skorch` (https://github.com/optuna/optuna-integration/pull/22)
- `tensorflow` integration (https://github.com/optuna/optuna-integration/pull/23)
- `sklearn.model_selection.GridSearchCV`'s arguments (#4336)
- `optuna.integration.ChainerPruningExtension` for migrating to optuna-integration package (#4370)
- `optuna.integration.ChainerMNStudy` for migrating to optuna-integration package (#4497)
- `optuna.integration.KerasPruningCallback` for migration to optuna-integration (#4558)
- `AllenNLP` integration for migration to optuna-integration (#4579)
- `tf.keras` integration (#4662)
- `skorch` integration for migration to optuna-integration (#4663)
- `tensorflow` integration (#4666)

We have started supporting Optuna on Mac and Windows. While many features already worked in previous versions, we have fixed issues that arose in certain modules, such as Storage. See #4457 and #4458.

- `system_attrs` and `set_system_attr` (https://github.com/optuna/optuna-integration/pull/4)
- `system_attrs` and `set_system_attr` (#4550)
- `PyTorch-Lightning` (#4384)
- `CmaEsSampler` (#4464)
- `optuna.samplers._search_space.intersection.py` to `optuna.search_space.intersection.py` (#4505)
- `plot_terminator_improvement` as visualization of `optuna.terminator` (#4609)
- `optuna.terminator` to `optuna/terminator/__init__.py` (#4669)
- `plot_terminator_improvement` (#4701)
- `cmaes` package lazily (#4394)
- `BruteForceSampler` stateless (#4408)
- `optuna.terminator.improvement.gp.botorch` (#4483)
- `Yvar` in `_BoTorchGaussianProcess` (#4488)
- `_BoTorchGaussianProcess` to suppress warning messages (#4510)
- `intersection_search_space` from `study` to `trials` (#4514)
- `distributed>=2023.3.2` (#4589, thanks @jrbourbeau!)
- `plot_rank` marker lines (#4602)
- `study.ask` and `study.get_trials` (#4631)
- `botorch` dependency (#4368)
- `colorlog` compatibility problem (#4406)
- `add_trial` (#4416)
- `RDBStorage.get_best_trial` when there are `inf`s (#4422)
- `RDBStorage` or `JournalStorage` (#4434)
- `param_mask` for multivariate TPE with `constant_liar` (#4462)
- `QMCSampler` samplers reproducible with `seed=0` (#4480)
- `metric_names` on `_log_completed_trial()` function (#4594)
- `ImportError` for `botorch<=0.4.0` (#4626)
- `n_retries += 1` in `RDBStorage` (#4658)
- `CachedStorage` (#4670)
- `ValueError`: Rank 0 node expects an `optuna.trial.Trial` instance as the trial argument (#4698, thanks @keisukefukuda!)
- `plot_terminator_improvement` and fix some bugs (#4702)
- `pyproject.toml` for packaging (#4164)
- `sphinxcontrib.jquery` explicitly (https://github.com/optuna/optuna-integration/pull/18)
- `Terminator` class (#4596)
- `intersphinx_mapping` in `conf.py` (#4290)
- `MeanDecreaseImpurityImportanceEvaluator` (#4385)
- `sphinxcontrib.jquery` extension to `conf.py` (#4615)
- `SkoptSampler` (#4625)
- `rank_plot` function and its matplotlib version (#4660)
- `optuna.termintor` (#4675)
- `plot_terminator_improvement` (#4677)
- `versionadded` directives (#4681)
- `DaskStorage` (#4694)
- `min_n_trials` (#4709)
- `black .` with black 23.1.0 (https://github.com/optuna/optuna-examples/pull/168)
- `pytorch_distributed_spawn.py` (https://github.com/optuna/optuna-examples/pull/175)
- `optuna-integration` in `chainer` CI (https://github.com/optuna/optuna-examples/pull/176)
- `FutureWarning` about `Trial.set_system_attr` in storage tests (#4323)
- `test_nsgaii.py` (#4387)
- `test_with_server.py` (#4402)
- `Chainer` (#4410)
- `optuna.terminator.improvement._preprocessing.py` (#4506)
- `PyTorch Lightning` (#4520)
- `_imports.py` from optuna (https://github.com/optuna/optuna-integration/pull/16)
- `AllenNLP` in Checks (integration) (#4277)
- `tests/hypervolume_tests/test_hssp.py` (#4329)
- `CmaEsSampler` (#4395)
- `PyTorch Distributed` (#4413)
- `numpy.polynomial` in `_erf.py` (#4415)
- `_ParzenEstimator` (#4433)
- `RegretBoundEvaluator` (#4442)
- `Checks(integration)` about `terminator/.../botorch.py` (#4461)
- `RegretBoundEvaluator` (#4469)
- `optuna.samplers._search_space.group_decomposed.py` to `optuna.search_space.group_decomposed.py` (#4491)
- `optuna.visualization` (#4525, thanks @harupy!)
- `tests.visualization_tests` (#4526, thanks @harupy!)
- `_BoTorchGaussianProcess` (#4530)
- `optuna.visualization.plot_timeline` (#4540)
- `SingleTaskGP` for Optuna terminator (#4542)
- `optuna.samplers.IntersectionSearchSpace` and `optuna.samplers.intersection_search_space` (#4549)
- `IntersectionSearchSpace` in `optuna.terminator` module (#4595)
- `BaseErrorEvaluator` and classes that inherit from it (#4607)
- `import Rectangle` in `visualization/matplotlib` (#4620)
- `visualize/_rank.py` and `visualization_tests/` (#4628)
- `_distribution_is_log` to `optuna.distributions` from `optuna/terminator/__init__.py` (#4668)
- `_fast_non_dominated_sort()` from the samplers (#4671)
- `get_all_trials` of `_CachedStorage` is called (#4672)
- `actions/setup-python` in `mac-tests` (follow-up for #4307) (#4343)
- `ProcessGroup` import from `torch.distributed` (#4347)
- `gh-action-pypi-publish` (#4359)
- `checks` (#4364)
- `NO_COLOR` env or not tty (#4376)
- `ubuntu-latest` in PyPI publish CI (#4400)
- `PyYAML==5.1` on `tests-with-minimum-dependencies` (#4435)
- `Checks(integration)` (#4482)
- `Distributed` version (#4545)
- `codecov` (#4606)
- `test` in `checks-integration` CI (#4612)
- `Output dependency tree` by pipdeptree to Actions (#4624)
- `fakeredis` (#4637)
- `mlflow` with Python 3.11 (#4647)
- `cached-path` from `setup.py` (#4357)
- `hacking` with `flake8` (#4556)
- `lightning_logs` to `.gitignore` (#4565)
- `black` and `isort` in `formats.sh` (#4610)
- `benchmark`, `optional`, and `test` in dev Docker image (#4611)
- `optuna-integration` (#4636)

This release was made possible by the authors and the people who participated in the reviews and discussions.
@Alnusjaponica, @HideakiImamura, @Ilevk, @Jendker, @Kaushik-Iyer, @amylase, @c-bata, @contramundum53, @cross32768, @eukaryo, @g-votte, @gen740, @gituser789, @harupy, @himkt, @hvy, @jrbourbeau, @keisuke-umezawa, @keisukefukuda, @knshnb, @kstoneriv3, @li-li-github, @nomuramasahir0, @not522, @nzw0301, @toshihikoyanase, @tungbq
This is the release note of v3.1.1.
- `cmaes` package lazily (#4573)
- `RDBStorage` or `JournalStorage` (#4572)
- `inf`s (#4574)
- `types-tqdm` for lint (#4566)

This release was made possible by the authors and the people who participated in the reviews and discussions.
@HideakiImamura, @contramundum53, @not522
This is the release note of v3.0.6.
This release was made possible by the authors and the people who participated in the reviews and discussions.
@c-bata @HideakiImamura
This is the release note of v3.1.0.
You don't have to read this page from top to bottom to get a summary of Optuna v3.1; the recommended way is to read the release blog.
(Figure: CMA-ES vs. CMA-ES with Margin. The animation is taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.)
CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our CmaEsSampler
, which makes it more efficient in these cases. You can see the benchmark results here. For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - arXiv”, which has been accepted for presentation at GECCO 2022.
import optuna
from optuna.samplers import CmaEsSampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y


study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=100)
JournalFileStorage
, a file storage backend based on JournalStorage
, supports NFS (Network File System) environments. It is the easiest option for users who wish to execute distributed optimization in environments where it is difficult to set up database servers such as MySQL, PostgreSQL or Redis (e.g. #815, #1330, #1457 and #2216).
import optuna
from optuna.storages import JournalStorage, JournalFileStorage


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y


storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
For more information on JournalFileStorage
, see the blog post “Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage” written by @wattlebirdaz.
We have replaced the Redis storage backend with a JournalStorage
-based one. The experimental RedisStorage
class has been removed in v3.1. The following example shows how to use the new JournalRedisStorage
class.
import optuna
from optuna.storages import JournalStorage, JournalRedisStorage


def objective(trial):
    ...


storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
DaskStorage, a new storage backend based on Dask.distributed, is supported. It allows you to leverage distributed capabilities with APIs similar to concurrent.futures. DaskStorage can be used with InMemoryStorage, so you don't need to set up a database server. Here's a code example showing how to use DaskStorage:
import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait


def objective(trial):
    ...


with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for i in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")
Setting up a Dask cluster is easy: install dask
and distributed
, then run the dask scheduler
and dask worker
commands, as detailed in the Quick Start Guide in the Dask.distributed documentation.
$ pip install optuna dask distributed
$ dask scheduler
INFO - Scheduler at: tcp://192.168.1.8:8686
INFO - Dashboard at: :8687
…
$ dask worker tcp://192.168.1.8:8686
$ dask worker tcp://192.168.1.8:8686
$ dask worker tcp://192.168.1.8:8686
See the documentation for more information.
BruteForceSampler
, a new sampler for brute-force search, tries all combinations of parameters. In contrast to GridSampler
, it does not require passing the search space as an argument and works even with branches. This sampler constructs the search space with the define-by-run style, so it works by just adding sampler=optuna.samplers.BruteForceSampler()
.
import optuna


def objective(trial):
    c = trial.suggest_categorical("c", ["float", "int"])
    if c == "float":
        return trial.suggest_float("x", 1, 3, step=0.5)
    elif c == "int":
        a = trial.suggest_int("a", 1, 3)
        b = trial.suggest_int("b", a, 3)
        return a + b


study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
study.optimize(objective)
The `constant_liar` option of `TPESampler` is intended for distributed or batch optimization. It was introduced in v2.8.0 but suffered from performance degradation in specific situations. In this release, we identified the cause of the problem and resolved it, verifying the fix with extensive performance experiments. See #4073 for more details.
About 50% of the time taken by `import optuna` was consumed by SciPy-related modules. SciPy also occupies 110 MB of storage space, which is problematic in environments with limited resources such as serverless computing.
We decided to implement the scientific functions on our own to make the SciPy dependency optional. Thanks to contributors' efforts on performance optimization, our implementation is as fast as the SciPy-based code even though ours is written in pure Python. See #4105 for more information.
Note that QMCSampler
still depends on SciPy. If you use QMCSampler
, please explicitly specify SciPy as your dependency.
We are developing a new UI for Optuna Dashboard that is available as an opt-in feature from the beta release - simply launch the dashboard as usual and click the link to the new UI. Please try it out and share your thoughts with us.
$ pip install "optuna-dashboard>=0.9.0b2"
Feedback Survey: The New UI for Optuna Dashboard
We have changed the supported Python versions. Specifically, Python 3.6 has been removed from the supported versions and Python 3.11 has been added. See #3021 and #3964 for more details.
study.optimize()
in multiple threads (#4068)TPESampler
even when multivariate=True
(#4079)RedisStorage
(#4156)set_system_attr
in Study
and Trial
(#4188)directions
arg to storage.create_new_study
(#4189)system_attrs
in Study
class (#4250)Trial.system_attrs
property method (#4264)device
argument of TorchDistributedTrial
(#4266)CmaEsSampler
(#4016)BoTorchSampler
(#4101)JournalStorage
of Redis backend to resume from a snapshot (#4102)TorchDistributedTrial
uses group
as parameter instead of device
(#4106, thanks @reyoung!)user_attrs
to print by Optuna studies in cli.py
(#4129, thanks @gonzaload!)BruteForceSampler
(#4132, thanks @semiexp!)__getstate__
and __setstate__
to RedisStorage
(#4135, thanks @shu65!)qNoisyExpectedHypervolumeImprovement
acquisition function from Botorch (Issue#4014) (#4186)get_trial_id_from_study_id_trial_number()
method to BaseStorage
(#3910)search_space
values of GridSampler
explicitly (#4062)optimize
(#4098)TPESampler
(#4105)enqueue_trial
(#4126)tests/samplers_tests/test_nsgaii.py::test_fast_non_dominated_sort_with_constraints
(#4128, thanks @mist714!)getstate
and setstate
to journal storage (#4130, thanks @shu65!)None
in slice plot (#4133, thanks @belldandyxtq!)plot_intermediate_value
(#4134, thanks @belldandyxtq!)suggest_categorical
(#4143, thanks @ConnorBaker!)study.directions
to reduce the number of get_study_directions()
calls (#4146)Trial
class (#4240)CMAwM
class even when there is no discrete params (#4289)OPTUNA_STORAGE
environment variable in Optuna CLI (#4299, thanks @Hakuyume!)@overload
to ChainerMNTrial
and TorchDistributedTrial
(Follow-up of #4143) (#4300)OPTUNA_STORAGE
environment variable experimental (#4316)TPESampler
(#3953, thanks @gasin!)GridSampler
(#3957)sqlalchemy.orm.declarative_base
(#3967)intermediate_value_type
and value_type
columns if exists (#4015)SkoptSampler
(#4023)datetime.isoformat
strings (#4025)JournalStorage
set_trial_state_values
(#4033)TPESampler
reproducible (#4056)constant_liar
option (#4073)JournalFileStorage.append_logs
(#4076)MLflowCallback
(#4097)OptunaSearchCV
(#4120)_get_bracket_id
in HyperbandPruner
(#4131, thanks @zaburo-ch!)to_internal_repr
of FloatDistribution
and IntDistribution
(#4137)PartialFixedSampler
to handle None
correctly (#4147, thanks @halucinor!)JournalFileStorage
on Windows (#4151)TPESampler
's constant_liar
(#4325)ProcessGroup
from torch.distributed
(#4344)thop
with fvcore
(#3906)importlib-metadata
(#4036)matplotlib
(#4044)thop
with fvcore
(#3906)FrozenTrial
(#3943)BaseStorage
(#3948)log_loss
instead of deprecated log
since sklearn
1.1 (#3993)benchmarks/README.md
(#4021)ConvergenceWarning
in the ask-and-tell tutorial (#4032)NSGAIISampler
(#4045)BruteForceSampler
in the samplers' list (#4152)multi_objective
module (#4167)QMCSampler
(#4179)RedisStorage
from docstring (#4232)BruteForceSampler
example to the document (#4244)BruteForceSampler
(#4245)BruteForceSampler
(#4267)XGBoostPruningCallback
(#4270)CMAEvolutionStrategy
link in integration.PyCmaSampler
document (#4284, thanks @hrntsm!)sphinx
with nitpicky option and fix typos (#4287)JournalStorage
(#4308, thanks @hrntsm!)optuna/integration/dask.py
(#4333)suggest_float
in BruteForceSampler
(#4334)verbose_eval
argument from lightgbm
callback in tutorial pages (#4335)sphinx_rtd_theme
supports Sphinx 6 (#4341)thop
with fvcore
(https://github.com/optuna/optuna-examples/pull/136)Optuna-distributed
to external projects (https://github.com/optuna/optuna-examples/pull/137)CONTRIBUTING.md
(https://github.com/optuna/optuna-examples/pull/139)scikit-learn
instead of sklearn
(https://github.com/optuna/optuna-examples/pull/141)tensorflow
to <2.11.0
(https://github.com/optuna/optuna-examples/pull/146)botorch
version (https://github.com/optuna/optuna-examples/pull/151)numpy
version to 1.23.x
for mxnet
examples (https://github.com/optuna/optuna-examples/pull/154)tensorflow
2.11 syntax to fix CI error (https://github.com/optuna/optuna-examples/pull/156)Monitor
to resolve stable_baselines3
's warning (https://github.com/optuna/optuna-examples/pull/162)tests/test_distributions.py
(#3912)tests/trial_tests
(#3914)tests/study_tests/
(#3915)tests/integration_tests/test_sklearn.py
(#3922)MLflowCallback
and WeightsAndBiasesCallback
(#3923)RuntimeWarning
when nanmin
and nanmax
take an array only containing nan values from pruners_tests
(#3924)pytorch_distributed
and chainermn
modules (#3927)tests/integration_tests/test_lightgbm.py
(#3944)tests/visualization_tests/test_contour.py
(#3954)tests/visualization_tests/test_slice.py
(#3970, thanks @jmsykes83!)tests/visualization_tests/test_optimization_history.py
(#4024)PYTHONHASHSEED
for the hash-dependent test (#4031)study.tell
from another process (#4039, thanks @Abelarm!)get_cmap
warning from tests/visualization_tests/test_param_importances.py
(#4095)n_trials
for CI time reduction (#4117)test_pop_waiting_trial_thread_safe
on RedisStorage (#4119)BruteForceSampler
for infinite search space (#4153)parametrize_sampler
(#4154)dask.distributed
integration (#4170)DaskStorage
to existing storage tests (#4176, thanks @jrbourbeau!)test_catboost.py
(#4190)test/integration_tests/test_sampler.py
(#4204)PyTorch Lightning
in Checks (integration) (#4279)OPTUNA_STORAGE
environment variable to check missing storage errors (#4306)Trial
not FrozenTrial
in a test of WeightsAndBiasesCallback
(#4309)_set_alembic_revision
(#4319)error_score
is stored (#4337)_tell.py
(#3841)None
parameter in TPESampler
(#3886)cliff
to argparse
(#4100)--no-implicit-reexport
option (#4110)find_any_distribution
(#4127)mlflow
2.0.1 syntax (#4173)_preprocess_argv
in CLI (#4187)_solve_hssp
to _hypervolume/utils.py
(#4227, thanks @jpbianchi!)CmaEsSampler
(#4233)CmaEsSampler
(#4239)JournalRedisStorage
(#4246)TorchDistributedTrial
(#4271)Chainer
in Checks (integration) (#4276)BoTorch
in Checks (integration) (#4278)dask.py
in Checks (integration) (#4280)botorch
module by adding the version constraint of gpytorch
(#3950)# type: ignore
for mypy 0.981 (#4019)Tests
and Tests (Storage with server)
(#4118)document
(#4160)workflow_dispatch
trigger to the integration tests (#4166)mlflow==2.0.1
(#4171)fakeredis
in benchmark dependencies (#4177)asv
speed benchmark (#4185)botorch
to avoid CI failure (#4228)pytest
dependency for asv
(#4243)pytorch_distributed.py
in Checks (integration) (#4281)test_pytorch_distributed.py
again (#4301)cmaes
(#4321)stale
(#4071)tox.ini
(#4078)days-before-issue-stale
300 days (#4091)optuna.TYPE_CHECKING
(#4238)examples/README
(#4283)This release was made possible by the authors and the people who participated in the reviews and discussions.
@Abelarm, @Alnusjaponica, @ConnorBaker, @Hakuyume, @HideakiImamura, @Jasha10, @amylase, @belldandyxtq, @c-bata, @contramundum53, @cross32768, @erentknn, @eukaryo, @g-votte, @gasin, @gen740, @gonzaload, @halucinor, @himkt, @hrntsm, @hvy, @jmsykes83, @jpbianchi, @jrbourbeau, @keisuke-umezawa, @knshnb, @mist714, @ncclementi, @not522, @nzw0301, @rene-rex, @reyoung, @semiexp, @shu65, @sile, @toshihikoyanase, @wattlebirdaz, @xadrianzetx, @zaburo-ch
This is the release note of v3.1.0-b0.
CMA-ES CMA-ES with Margin (The animation is taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.)
CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our CmaEsSampler, which makes it more efficient in these cases. You can see the benchmark results here. For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - arXiv”, which has been accepted for presentation at GECCO 2022.
import optuna
from optuna.samplers import CmaEsSampler
def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y
study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=100)
JournalFileStorage
, a file-based log backend for JournalStorage
, supports NFS (Network File System) environments. It is the easiest option for users who wish to execute distributed optimization in environments where it is difficult to set up database servers such as MySQL, PostgreSQL or Redis (e.g. #815, #1330, #1457 and #2216).
import optuna
from optuna.storages import JournalStorage, JournalFileStorage
def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y
storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
For more information on JournalFileStorage
, see the blog post “Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage” written by @wattlebirdaz.
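The idea behind operation-based logging can be sketched in plain Python. The snippet below is a hypothetical illustration of the concept, not Optuna's actual implementation: every mutation is appended to the log as one JSON line, and the current state is reconstructed by replaying the log from the beginning, which is why an append-only file on a shared filesystem such as NFS suffices.

```python
import json
import os
import tempfile

# Hypothetical sketch of operation-based logging (not Optuna's code):
# each write appends one JSON operation; readers rebuild the state by replay.

def append_op(path, op):
    # Append-only writes: history is never rewritten, so concurrent readers
    # on a shared filesystem only ever see a valid prefix of the log.
    with open(path, "a") as f:
        f.write(json.dumps(op) + "\n")

def replay(path):
    trials = {}
    with open(path) as f:
        for line in f:
            op = json.loads(line)
            if op["kind"] == "create_trial":
                trials[op["trial_id"]] = {"state": "running"}
            elif op["kind"] == "set_value":
                trials[op["trial_id"]].update(state="complete", value=op["value"])
    return trials

log_path = os.path.join(tempfile.mkdtemp(), "journal_demo.log")
append_op(log_path, {"kind": "create_trial", "trial_id": 0})
append_op(log_path, {"kind": "set_value", "trial_id": 0, "value": 4.0})
print(replay(log_path))  # {0: {'state': 'complete', 'value': 4.0}}
```

Because the log is the single source of truth, any process that can append to and read the file can participate in the optimization, with no database server involved.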
DaskStorage
, a new storage backend based on Dask.distributed, is supported. It enables distributed optimization with an API similar to concurrent.futures
. An example is shown below (the full example code is available in the optuna-examples repository).
import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait
def objective(trial):
    ...
with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for _ in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")
One of the interesting aspects is the availability of InMemoryStorage
. You don’t need to set up database servers for distributed optimization. You still need to set up the Dask.distributed cluster, but doing so is straightforward, as the following commands show. See the Quickstart in the Dask.distributed documentation for more details.
$ pip install optuna dask distributed
$ dask-scheduler
INFO - Scheduler at: tcp://192.168.1.8:8686
INFO - Dashboard at: :8687
…
$ dask-worker tcp://192.168.1.8:8686
$ dask-worker tcp://192.168.1.8:8686
$ dask-worker tcp://192.168.1.8:8686
$ python dask_simple.py
We have replaced the Redis storage backend with a JournalStorage-based one. The experimental RedisStorage
class has been removed in v3.1. The following example shows how to use the new JournalRedisStorage
class.
import optuna
from optuna.storages import JournalStorage, JournalRedisStorage
def objective(trial):
    …
storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
BruteForceSampler
, a new sampler for brute-force search, tries all combinations of parameters. In contrast to GridSampler
, it does not require the search space to be passed as an argument, and it works even with branches. The sampler constructs the search space in a define-by-run style, so enabling it is as simple as passing sampler=optuna.samplers.BruteForceSampler()
.
import optuna
def objective(trial):
    c = trial.suggest_categorical("c", ["float", "int"])
    if c == "float":
        return trial.suggest_float("x", 1, 3, step=0.5)
    elif c == "int":
        a = trial.suggest_int("a", 1, 3)
        b = trial.suggest_int("b", a, 3)
        return a + b
study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
study.optimize(objective)
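To see why brute force works even with branches, the define-by-run space above can be enumerated by hand. The sketch below is purely illustrative and is not how BruteForceSampler is implemented: each branch of the objective contributes its own sub-grid, and the bounds of `b` depend on the sampled value of `a`, which is exactly the kind of dependency a static search-space argument cannot express.

```python
# Illustrative enumeration of the branching search space above
# (not BruteForceSampler's internals).

def float_range(lo, hi, step):
    """Enumerate lo, lo+step, ..., hi inclusive (step evenly divides hi-lo)."""
    n = round((hi - lo) / step)
    return [lo + i * step for i in range(n + 1)]

def enumerate_space():
    combos = []
    for c in ["float", "int"]:                      # suggest_categorical("c", ...)
        if c == "float":
            for x in float_range(1.0, 3.0, 0.5):    # suggest_float("x", 1, 3, step=0.5)
                combos.append({"c": c, "x": x})
        else:
            for a in [1, 2, 3]:                     # suggest_int("a", 1, 3)
                for b in range(a, 4):               # suggest_int("b", a, 3): depends on a
                    combos.append({"c": c, "a": a, "b": b})
    return combos

combos = enumerate_space()
print(len(combos))  # 5 float values + 6 (a, b) pairs = 11 trials
```

BruteForceSampler discovers these branches at run time, as the objective asks for each parameter, and stops once every combination has been tried.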
study.optimize()
in multiple threads (#4068)TPESampler
even when multivariate=True
(#4079)RedisStorage
(#4156)set_system_attr
in Study
and Trial
(#4188)system_attrs
in Study
class (#4250)CmaEsSampler
(#4016)BoTorchSampler
(#4101)JournalStorage
of Redis backend to resume from a snapshot (#4102)user_attrs
to print by optuna studies in cli.py
(#4129, thanks @gonzaload!)BruteForceSampler
(#4132, thanks @semiexp!)__getstate__
and __setstate__
to RedisStorage
(#4135, thanks @shu65!)JournalRedisStorage
(#4139, thanks @shu65!)qNoisyExpectedHypervolumeImprovement
acquisition function from BoTorch
(Issue#4014) (#4186)get_trial_id_from_study_id_trial_number()
method to BaseStorage (#3910)search_space
values of GridSampler
explicitly (#4062)enqueue_trial
(#4126)tests/samplers_tests/test_nsgaii.py::test_fast_non_dominated_sort_with_constraints
(#4128, thanks @mist714!)None
in slice plot (#4133, thanks @belldandyxtq!)plot_intermediate_value
(#4134, thanks @belldandyxtq!)study.directions
to reduce the number of get_study_directions()
calls (#4146)Trial
class (#4240)TPESampler
(#3953, thanks @gasin!)GridSampler
(#3957)sqlalchemy.orm.declarative_base
(#3967)intermediate_value_type
and value_type
columns if exists (#4015)SkoptSampler
(#4023)datetime.isoformat
strings (#4025)set_trial_state_values
(#4033)TPESampler
reproducible (#4056)constant_liar
option (#4073)JournalFileStorage.append_logs
(#4076)MLflowCallback
(#4097)OptunaSearchCV
(#4120)_get_bracket_id
in HyperbandPruner
(#4131, thanks @zaburo-ch!)to_internal_repr
of FloatDistribution
and IntDistribution
(#4137)PartialFixedSampler
to handle None
correctly (#4147, thanks @halucinor!)thop
with fvcore
(#3906)importlib-metadata
(#4036)matplotlib
(#4044)thop
with fvcore
(#3906)FrozenTrial
(#3943)BaseStorage
(#3948)log_loss
instead of deprecated log
since sklearn
1.1 (#3993)ConvergenceWarning
in the ask-and-tell tutorial (#4032)NSGAIISampler
(#4045)BruteForceSampler
in the samplers' list (#4152)multi_objective
module (#4167)QMCSampler
(#4179)RedisStorage
from docstring (#4232)BruteForceSampler
example to the document (#4244)BruteForceSampler
(#4245)thop
with fvcore
(https://github.com/optuna/optuna-examples/pull/136)Optuna-distributed
to external projects (https://github.com/optuna/optuna-examples/pull/137)CONTRIBUTING.md
(https://github.com/optuna/optuna-examples/pull/139)scikit-learn
instead of sklearn
(https://github.com/optuna/optuna-examples/pull/141)tensorflow
to <2.11.0
(https://github.com/optuna/optuna-examples/pull/146)1.23.x
for mxnet examples (https://github.com/optuna/optuna-examples/pull/154)tests/test_distributions.py
(#3912)tests/trial_tests
(#3914)tests/study_tests/
(#3915)tests/integration_tests/test_sklearn.py
(#3922)MLflowCallback
and WeightsAndBiasesCallback
(#3923)RuntimeWarning
when nanmin
and nanmax
take an array only containing nan values from pruners_tests
(#3924)pytorch_distributed
and chainermn
modules (#3927)tests/integration_tests/test_lightgbm.py
(#3944)tests/visualization_tests/test_contour.py
(#3954)tests/visualization_tests/test_slice.py
(#3970, thanks @jmsykes83!)tests/visualization_tests/test_optimization_history.py
(#4024)PYTHONHASHSEED
for the hash-dependent test (#4031)study.tell
from another process (#4039, thanks @Abelarm!)get_cmap
warning from tests/visualization_tests/test_param_importances.py
(#4095)n_trials
for CI time reduction (#4117)test_pop_waiting_trial_thread_safe
on RedisStorage (#4119)BruteForceSampler
for infinite search space (#4153)parametrize_sampler
(#4154)dask.distributed
integration (#4170)DaskStorage
to existing storage tests (#4176, thanks @jrbourbeau!)test_catboost.py
(#4190)test/integration_tests/test_sampler.py
(#4204)_tell.py
(#3841)TPESampler
(#3886)cliff
to argparse
(#4100)--no-implicit-reexport
option (#4110)find_any_distribution
(#4127)_preprocess_argv
in CLI (#4187)_solve_hssp
to _hypervolume/utils.py
(#4227, thanks @jpbianchi!)JournalRedisStorage
(#4246)botorch
module by adding the version constraint of gpytorch
(#3950)# type: ignore
for mypy 0.981 (#4019)Tests
and Tests (Storage with server)
(#4118)document
(#4160)workflow_dispatch
trigger to the integration tests (#4166)mlflow==2.0.1
(#4171)fakeredis
in benchmark deps (#4177)asv
speed benchmark (#4185)botorch
to avoid CI failure (#4228)pytest
dependency for asv (#4243)stale
(#4071)tox.ini
(#4078)days-before-issue-stale
300 days (#4091)optuna.TYPE_CHECKING
(#4238)This release was made possible by the authors and the people who participated in the reviews and discussions.
@Abelarm, @Alnusjaponica, @HideakiImamura, @amylase, @belldandyxtq, @c-bata, @contramundum53, @cross32768, @erentknn, @eukaryo, @g-votte, @gasin, @gen740, @gonzaload, @halucinor, @himkt, @hvy, @jmsykes83, @jpbianchi, @jrbourbeau, @keisuke-umezawa, @knshnb, @mist714, @ncclementi, @not522, @nzw0301, @rene-rex, @semiexp, @shu65, @sile, @toshihikoyanase, @wattlebirdaz, @xadrianzetx, @zaburo-ch