Hyperactive Versions

An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.

v4.6.0

6 months ago

  • add support for constrained optimization
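
A minimal sketch of how constrained optimization might be used; the "constraints" kwarg of "add_search" and its call shape (a list of functions that receive the parameter set and return a bool) are assumptions, not taken from this release note:

```python
import numpy as np
from hyperactive import Hyperactive

def objective(opt):
    # sphere function; negated because Hyperactive maximizes the score
    return -(opt["x"] ** 2 + opt["y"] ** 2)

def constraint(opt):
    # only positions with x > y are considered valid
    return opt["x"] > opt["y"]

search_space = {
    "x": list(np.arange(-5, 5, 0.1)),
    "y": list(np.arange(-5, 5, 0.1)),
}

hyper = Hyperactive()
hyper.add_search(objective, search_space, n_iter=100, constraints=[constraint])
hyper.run()
```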

v4.5.0

8 months ago
  • add early stopping feature to custom optimization strategies (see the sketch after this list)
  • display additional outputs from the objective function in the command-line results
  • add type hints to hyperactive-api
  • add tests for new features
  • add test for verbosity=False
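
A hedged sketch of both user-facing changes, assuming the objective function may return a (score, dict) tuple to surface extra outputs and that "add_search" accepts an "early_stopping" dict; both shapes are assumptions based on the feature names above:

```python
import numpy as np
from hyperactive import Hyperactive

def objective(opt):
    loss = (opt["x"] - 2) ** 2
    # assumption: returning (score, dict) surfaces the extra values
    # as additional columns in the printed results
    return -loss, {"loss": loss}

search_space = {"x": list(np.arange(-10, 10, 0.1))}

hyper = Hyperactive()
hyper.add_search(
    objective,
    search_space,
    n_iter=200,
    # assumption: stop after 25 iterations without improvement
    early_stopping={"n_iter_no_change": 25},
)
hyper.run()
```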

v4.4.0

1 year ago
  • add new feature: "optimization strategies" (see the sketch after this list)
  • redesign progress-bar
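
A sketch of what an optimization strategy could look like; the CustomOptimizationStrategy class, its import path, and the add_optimizer/duration interface are assumptions inferred from the feature name:

```python
import numpy as np
from hyperactive import Hyperactive
# assumed import paths and class names
from hyperactive.optimizers import RandomSearchOptimizer, BayesianOptimizer
from hyperactive.optimizers.strategies import CustomOptimizationStrategy

def objective(opt):
    return -(opt["x"] ** 2)

search_space = {"x": list(np.arange(-5, 5, 0.1))}

# spend the first half of the iterations exploring at random,
# then hand the collected data to bayesian optimization
strategy = CustomOptimizationStrategy()
strategy.add_optimizer(RandomSearchOptimizer(), duration=0.5)
strategy.add_optimizer(BayesianOptimizer(), duration=0.5)

hyper = Hyperactive()
hyper.add_search(objective, search_space, n_iter=100, optimizer=strategy)
hyper.run()
```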

v4.3.0

1 year ago
  • add new features from GFO (see the sketch after this list):
    • add Spiral Optimization
    • add Lipschitz Optimizer
    • add DIRECT Optimizer
    • print the random seed for reproducibility
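
A sketch of selecting one of the new optimizers; the class name SpiralOptimization and its import path are assumptions mirroring the GFO feature names above:

```python
import numpy as np
from hyperactive import Hyperactive
# assumed class name and import path for the new GFO-backed optimizer
from hyperactive.optimizers import SpiralOptimization

def objective(opt):
    return -(opt["x"] ** 2 + opt["y"] ** 2)

search_space = {
    "x": list(np.arange(-5, 5, 0.1)),
    "y": list(np.arange(-5, 5, 0.1)),
}

hyper = Hyperactive()
hyper.add_search(objective, search_space, n_iter=100, optimizer=SpiralOptimization())
hyper.run()
```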

v4.0.0

2 years ago

v3.2.4

2 years ago

Changes from v3.0.0 -> v3.2.4:

  • Decouple the number of runs from the number of active processes (Thanks to PartiallyTyped). This reduces memory load when the number of jobs is huge
  • New feature: the progress board lets the user monitor the optimization progress during the run (see the sketch after this list):
    • Display the trend of the best score
    • Plot parameters and score in parallel coordinates
    • Generate a filter file to define an upper and/or lower bound for each parameter and the score in the parallel-coordinates plot
    • List the parameters of the 5 best scores
  • add Python 3.8 to tests
  • add warnings if search-space values do not contain lists
  • improve stability of result-methods
  • add tests for hyperactive-memory + search spaces
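
A sketch of the v3-era progress board usage; the ProgressBoard class, its import path, and the "progress_board" kwarg are assumptions based on the feature description above:

```python
import numpy as np
from hyperactive import Hyperactive
# assumed import path and class name for the progress board
from hyperactive.dashboards import ProgressBoard

def objective(opt):
    return -(opt["x"] ** 2)

search_space = {"x": list(np.arange(-10, 10, 0.1))}

# opens a dashboard showing the best-score trend, a parallel-
# coordinates plot, and the parameters of the 5 best scores
board = ProgressBoard()

hyper = Hyperactive()
hyper.add_search(objective, search_space, n_iter=500, progress_board=board)
hyper.run()
```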

v2.3.0

3 years ago
  • add Tree-structured optimization algorithm (idea from Hyperopt)
  • add Decision-tree optimization algorithm (idea from sklearn)
  • enable new optimization parameters for bayes-opt:
    • max_sample_size: maximum number of samples for the gaussian-process regressor to train on; sampling is done by random choice.
    • skip_retrain: occasionally skips retraining the gaussian-process regressor during the optimization run, effectively returning multiple predictions for the next positions (which should be apart from one another)

v2.1.0

3 years ago
  • first stable implementation of the "long-term-memory" to save/load search positions/parameters and results.
  • enable warm starts of sequence-based optimizers (bayesian optimization, ...) with results from the "long-term-memory"
  • enable the use of gaussian-process regressors other than sklearn's: a GPR class (from GPy, GPflow, ...) can be passed to the "optimizer"-kwarg

v2.0.0

3 years ago

API change to improve usability: the Hyperactive class now accepts the training data, while the "search"-method accepts the search_config and other run-specific arguments such as n_iter, n_jobs, and optimizer.
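
Since this entry describes the API shape directly, a short sketch under that description; the exact search_config layout (model functions mapped to their search spaces) is an assumption:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from hyperactive import Hyperactive

X, y = load_iris(return_X_y=True)

def model(para, X, y):
    gbc = GradientBoostingClassifier(n_estimators=para["n_estimators"])
    return cross_val_score(gbc, X, y, cv=3).mean()

# assumed layout: each model function is mapped to its search space
search_config = {model: {"n_estimators": range(10, 200, 10)}}

opt = Hyperactive(X, y)                         # the class accepts the training data
opt.search(search_config, n_iter=30, n_jobs=2)  # run-specific arguments
```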

v1.1.1

4 years ago
  • small api-change
  • extend progress bar information
  • re-enable multiprocessing for new api