A drop-in replacement for Scikit-Learn's GridSearchCV / RandomizedSearchCV, but with cutting-edge hyperparameter tuning techniques.
- `time_budget_s` parameter (#134)
- `TuneSearchCV(loggers=["json", "tensorboard"])` (#100)
- `seed` parameter to make initial configuration sampling deterministic (#140)
- `best_params` accessible without `refit=True` (#114)
- `_train` (#97)
- Special thanks to: @krfricke, @amogkam, @Yard1, @richardliaw, @inventormc, @mattKretschmer
This tune-sklearn release is expected to work with:
Try it out: `pip install tune-sklearn==0.1.0`
See the most up-to-date version of the documentation at https://docs.ray.io/en/master/tune/api_docs/sklearn.html (corresponding to the master branch).
These release notes contain all updates since `tune-sklearn==0.0.7`.
- `tune-sklearn` now supports multiple search algorithms (including TPE from HyperOpt and BOHB). Thanks @Yard1!
- `tune-sklearn` now supports iterative training for XGBoost (by iteratively increasing the number of rounds) and for most models that have `warm_start` capabilities. This is only enabled if `early_stopping=True`.
- `n_iter` is now renamed to `n_trials` to avoid confusion.
- `local_mode` option to run everything on a single process. This can be faster in some cases.

Update setup.py to remove sklearn version control (#96)
[travis] try-fast-build (#95)
Travis fix (#94)
[docs] Fix docs and build to avoid regression (#92)
Warm start for ensembles (#90)
Explicitly pass `mode=max` to schedulers (#91)
Enable scikit-optimize again (#89)
Multimetric scoring (#62)
Early stopping for XGBoost + Update Readme (#63)
Fix BOHB, change n_iter -> n_trials, fix up early stopping (#81)
Disable the Ray Dashboard (#82)
Provide local install command (#78)
Use warm start for early stopping (#46)
Fix condition in _fill_config_hyperparam (#76)
Enable local mode + forward compat (#74)
Add a missing space in readme (#69)
New search algorithms (#68)
fix resources per trial (#52)
Thanks to @inventormc, @Yard1, @holgern, @krfricke, @richardliaw for contributing!