tune-sklearn Releases

A drop-in replacement for Scikit-Learn’s GridSearchCV / RandomizedSearchCV -- but with cutting-edge hyperparameter tuning techniques.

v0.2.0

New Features:

  • tune-sklearn now supports sampling with Optuna! (#136, #132) -- see the sketch after this list
  • You can now do deadline-based hyperparameter tuning with the new time_budget_s parameter (#134)
  • Custom logging can be done by passing in loggers as strings (TuneSearchCV(loggers=["json", "tensorboard"])) (#100)
  • Reproducible experiments can be set with a seed parameter to make initial configuration sampling deterministic (#140)
  • Custom stopping (such as stopping a hyperparameter search upon plateau) is now supported (#156)
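
Several of these features compose in a single call. A minimal sketch, assuming SGDClassifier as the estimator and a (low, high) tuple as the sampled range; the keyword names follow the notes above:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from tune_sklearn import TuneSearchCV

    X, y = make_classification(n_samples=500, random_state=0)

    # Optuna-backed sampling with a 60-second time budget, JSON/TensorBoard
    # logging, and a fixed seed for deterministic initial sampling.
    search = TuneSearchCV(
        SGDClassifier(),
        param_distributions={"alpha": (1e-4, 1e-1)},  # (low, high) range
        search_optimization="optuna",
        n_trials=10,
        time_budget_s=60,
        seed=42,
        loggers=["json", "tensorboard"],
    )
    search.fit(X, y)
    print(search.best_params_)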

Improvements:

  • Support for Tune search spaces (#128) -- see the example after this list
  • Use fractional GPUs for a Ray cluster (#145)
  • Bring the API in line with scikit-learn: best_params_ is accessible without refit=True (#114)
  • Early stopping support for sklearn Pipelines, LightGBM and CatBoost (#103, #109)
  • Implement resource step for early stopping (#121)
  • Raise errors on trial failures instead of logging them (#130)
  • Remove unnecessary dependencies (#152)
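
For instance, Ray Tune's native search-space API can now be passed directly as the parameter distributions. A brief sketch (the estimator and the ranges are illustrative):

    from ray import tune
    from sklearn.linear_model import SGDClassifier
    from tune_sklearn import TuneSearchCV

    # Ray Tune distributions used directly in place of sklearn-style ones.
    param_dists = {
        "alpha": tune.loguniform(1e-4, 1e-1),
        "epsilon": tune.uniform(1e-2, 1e-1),
    }
    search = TuneSearchCV(SGDClassifier(), param_dists, n_trials=8)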

Bug fixes:

  • Refactor early stopping case handling in _train (#97)
  • Fix warm-start errors (#106)
  • Fix hyperopt loguniform params (#104)
  • Fix multi-metric scoring issue (#111)
  • BOHB sanity checks (#133)
  • Avoid Loky pickle error (#150)

Special thanks to: @krfricke, @amogkam, @Yard1, @richardliaw, @inventormc, @mattKretschmer

v0.1.0

Release Information

This tune-sklearn release is expected to work with:

  • the latest Ray master branch
  • the latest Ray release (0.8.7)

Try it out: pip install tune-sklearn==0.1.0. See the most up-to-date version of the documentation (corresponding to the master branch) at https://docs.ray.io/en/master/tune/api_docs/sklearn.html.

Highlights

These release notes contain all updates since tune-sklearn==0.0.7.

  • tune-sklearn now supports multiple search algorithms (including TPE from HyperOpt and BOHB). Thanks @Yard1!
  • tune-sklearn now supports iterative training for XGBoost (by iteratively increasing the number of rounds) and most models that have warm_start capabilities. This is only enabled if early_stopping=True.
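
A minimal sketch of the early-stopping path, assuming SGDClassifier (a warm_start-capable model); max_iters is assumed here to cap the per-trial iterations:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from tune_sklearn import TuneSearchCV

    X, y = make_classification(n_samples=500, random_state=0)

    # early_stopping=True turns on iterative training so that poorly
    # performing trials can be stopped before training to completion.
    search = TuneSearchCV(
        SGDClassifier(),
        {"alpha": (1e-4, 1e-1)},
        early_stopping=True,
        max_iters=10,  # assumed cap on iterations per trial
        n_trials=5,
    )
    search.fit(X, y)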

Other notes:

  • The Ray Dashboard is disabled by default. This should reduce error messages.
  • n_iter has been renamed to n_trials to avoid confusion
  • Multi-metric scoring is now supported
  • You can set local_mode to run everything on a single process. This can be faster in some cases.
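
The last two notes combine naturally. A sketch, assuming local_mode is exposed as a constructor keyword (per the note above) and that refit names the metric used to pick best_params_:

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    from tune_sklearn import TuneGridSearchCV

    X, y = load_iris(return_X_y=True)

    # Multi-metric scoring; refit selects which metric drives best_params_.
    # local_mode=True runs everything in a single process.
    search = TuneGridSearchCV(
        SVC(),
        {"C": [0.1, 1.0, 10.0]},
        scoring=["accuracy", "f1_macro"],
        refit="accuracy",
        local_mode=True,  # assumed flag, per the note above
    )
    search.fit(X, y)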

List of changes

  • Update setup.py to remove sklearn version control (#96)
  • [travis] try-fast-build (#95)
  • Travis fix (#94)
  • [docs] Fix docs and build to avoid regression (#92)
  • Warm start for ensembles (#90)
  • Explicitly pass mode=max to schedulers (#91)
  • Enable scikit-optimize again (#89)
  • Multimetric scoring (#62)
  • Early stopping for XGBoost + Update Readme (#63)
  • Fix BOHB, change n_iter -> n_trials, fix up early stopping (#81)
  • Disable the Ray Dashboard (#82)
  • Provide local install command (#78)
  • Use warm start for early stopping (#46)
  • Fix condition in _fill_config_hyperparam (#76)
  • Enable local mode + forward compat (#74)
  • Add a missing space in readme (#69)
  • New search algorithms (#68)
  • fix resources per trial (#52)

Thanks to @inventormc, @Yard1, @holgern, @krfricke, @richardliaw for contributing!