Asynchronous Distributed Hyperparameter Optimization.
- `hunt` command to reuse `ExperimentClient.workon`. @bouthilx (#605)
- `orion db setup` asks for the right arguments based on the storage backend. @notoraptor (#586)

Plotting capability is being added to experiment clients. You can now plot the regret (the curve of the best objective found during optimization) with simply `experiment.plot.regret()`. You can find an example here.
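Under the hood, a regret curve is just the running best objective after each trial. A minimal pure-Python sketch of that computation (illustrative only, not Oríon's actual implementation):

```python
def regret_curve(objectives):
    """Best (lowest) objective observed after each trial."""
    best = float('inf')
    curve = []
    for objective in objectives:
        best = min(best, objective)
        curve.append(best)
    return curve

print(regret_curve([5.0, 3.0, 4.0, 1.0, 2.0]))  # → [5.0, 3.0, 3.0, 1.0, 1.0]
```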
A web API was added for v0.1.9 in order to support the visualization dashboard currently under development. See full documentation here.
- `rm` & `set` commands

Command line helpers have been added to simplify the process of deleting experiments and trials, as well as modifying trials in the database. See full documentation for both commands here.
- `is_broken` for `ExperimentView` (#432)
- `completed` field added to `info` cmd (#435)
- `branch_from` parameter documentation in `experiment_builder.build()` (#442)

The Python API is finally ready for release v0.1.8! :tada:
An API is now available to run experiments directly from Python instead of using the command line.
```python
from orion.client import create_experiment

experiment = create_experiment(
    name='foo',
    space=dict(x='uniform(-50,50)'))

trial = experiment.suggest()

# Do something using trial.params['x']

results = [dict(
    name='dummy_objective',
    type='objective',
    value=dummy_objective)]

experiment.observe(trial, results)
```
The current API provides a simple `workon` function for cheap experiments that can be executed by a single worker, and a generic `ExperimentClient` object (see example above) for optimization with multiple workers.
See documentation for more details.
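The single-worker pattern behind `workon` boils down to a suggest/observe loop. A toy stand-in in pure Python to show the shape of the loop (`ToyClient` and `toy_workon` are illustrative stand-ins doing random search, not Oríon's API):

```python
import random

class ToyClient:
    """Hypothetical stand-in for an experiment client (random search)."""

    def __init__(self, low, high, max_trials):
        self.low, self.high, self.max_trials = low, high, max_trials
        self.history = []  # list of (params, objective) pairs

    def suggest(self):
        """Propose a new point to evaluate."""
        return {'x': random.uniform(self.low, self.high)}

    def observe(self, params, objective):
        """Record the result of an evaluated trial."""
        self.history.append((params, objective))

    @property
    def is_done(self):
        return len(self.history) >= self.max_trials

def toy_workon(objective_fn, client):
    """Single-worker loop: suggest, evaluate, observe, until done."""
    while not client.is_done:
        params = client.suggest()
        client.observe(params, objective_fn(params['x']))
    return min(client.history, key=lambda item: item[1])

random.seed(0)
client = ToyClient(-50, 50, max_trials=20)
best_params, best_objective = toy_workon(lambda x: x ** 2, client)
```

With multiple workers, each worker runs the same loop against a shared storage backend, which is what `ExperimentClient` handles for you.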
Hyperband extends the Successive Halving algorithm by providing a way to exploit a fixed budget with different numbers of configurations for the Successive Halving algorithm to evaluate. It is especially useful when the trials are expensive to run and cheap noisy evaluations are possible. Think of it as using early evaluation during training to filter out bad candidates.
For more information on the algorithm, see original paper.
The Tree-structured Parzen Estimator (TPE) algorithm is a Sequential Model-Based Global Optimization (SMBO) algorithm: it builds models to propose new points based on previously observed trials.
Instead of modeling p(y|x) like other SMBO algorithms, TPE models p(x|y) and p(y); p(x|y) is modeled by transforming the generative process, replacing the distributions of the configuration prior with non-parametric densities.
TPE has the advantage of scaling particularly well compared to most model-based algorithms, which are typically sequential. It does not, however, model dependencies between hyperparameters; they are assumed to be independent.
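As a toy illustration of the density-ratio idea: split the observed trials at a quantile into "good" and "bad", fit a density to each, and favor candidates where the good density dominates. This is a pure-Python sketch with hypothetical helpers, far simpler than Oríon's TPE implementation:

```python
import math
import random

def kde(samples, bandwidth=1.0):
    """Tiny Gaussian kernel density estimate over 1-D samples."""
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    return lambda x: sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples) / norm

def tpe_suggest(history, gamma=0.25, n_candidates=50, rng=None):
    """history: list of (x, objective) pairs. Split at the gamma
    quantile, model l(x) over the good points and g(x) over the rest,
    then return the candidate maximizing the ratio l(x) / g(x)."""
    rng = rng or random
    ordered = sorted(history, key=lambda item: item[1])
    n_good = max(1, int(gamma * len(ordered)))
    good = [x for x, _ in ordered[:n_good]]
    bad = [x for x, _ in ordered[n_good:]]
    l, g = kde(good), kde(bad)
    candidates = [rng.gauss(rng.choice(good), 1.0) for _ in range(n_candidates)]
    return max(candidates, key=lambda x: l(x) / (g(x) + 1e-12))

# Minimize (x - 2)^2: good points cluster near x = 2.
history = [(float(x), (x - 2) ** 2) for x in range(-5, 6)]
suggestion = tpe_suggest(history, rng=random.Random(0))
```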
For more information on the algorithm, see original papers at:
To support integration with other tools and services such as MLflow or Weights & Biases, we wrapped our previous database backend with a storage backend. The database backends are now available within the Legacy storage backend. In addition, we now have a backend for Track. The latter is planned to serve as a bridge between Oríon and other experiment management platforms or services. Track package development is on ice for now, but contributions are very much welcome. :)
Although Oríon may still be compatible with Python 3.5, we no longer maintain support for it. Python 3.8 is now officially supported.
By default Oríon now rounds hyperparameters to 4 significant digits (e.g. 0.00041239123 becomes 0.0004124). The rationale is that small variations in continuous hyperparameters typically lead to small variations in the objective. When sharing hyperparameters (e.g. in publications), one can now share the rounded values with the exact corresponding objectives, instead of rounding the hyperparameters after execution and risking sharing unreproducible results.
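Significant-digit rounding of this kind can be sketched with a generic helper (not Oríon's exact code):

```python
import math

def round_sig(value, digits=4):
    """Round to a given number of significant digits."""
    if value == 0:
        return value
    exponent = math.floor(math.log10(abs(value)))
    return round(value, digits - 1 - exponent)

print(round_sig(0.00041239123))  # → 0.0004124
```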
The documentation has been through a major rework.
- `orion` alone (#408)

There was an incompatibility introduced in v0.1.6 that would break PickledDB databases created with previous versions. This minor release introduces a new command, `orion db upgrade`, to upgrade the database schema so that databases created with `orion<v0.1.6` can still be used in new versions.
- `orion db upgrade` command (#293)
- `orion` on PyPI

We finally have the `orion` namespace: pypa/warehouse#4189!!! :tada: :confetti_ball:
Modifications of experiments (code changes, search space modifications, etc.) will now be automatically resolved and will no longer lead to the (confusing) branch resolver prompt. When a modification leads to branching, the version of the experiment is incremented (starting at 1). The unique index of experiments is now (name, version) instead of (name, user).
The user name is no longer part of the experiment's index. This means that someone with username A in one environment may retrieve the same experiments with username B in another environment without any problem. Previously, this was only possible using the option `--user B` to override the default system username. This was because queries on the database were done using the username; that is no longer the case.
The new prior `fidelity(low, high, base)` makes it more convenient to define the fidelity dimension. The budgets for the different rungs can now be scaled between `low` and `high` according to a logarithm of base `base`.
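Assuming the rungs follow a geometric schedule in powers of `base` between `low` and `high`, the budgets can be sketched as follows (`rung_budgets` is a hypothetical helper, not Oríon's implementation):

```python
def rung_budgets(low, high, base=2):
    """Geometric schedule of budgets from `low` to `high`,
    growing by powers of `base`."""
    budgets = []
    budget = low
    while budget < high:
        budgets.append(budget)
        budget *= base
    budgets.append(high)
    return budgets

print(rung_budgets(1, 16, base=4))  # → [1, 4, 16]
```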
The algorithm will now stop registering trials in the low rungs once the higher rungs are filled. This reduces the waste of resources spent on trials in low rungs that cannot be promoted to higher rungs, since the latter will be completed shortly.
- `user` as an index for Experiment (#264, #273)
- `info` (#277)
- `orion.core.config` (#239, #251, #265)
- `info` command (#260)
- `orion` on PyPI (#271)