Powerful Benchmarker Versions

A library for ML benchmarking. It's powerful.

v0.9.33

3 years ago

Version requirements

Requires pytorch-metric-learning 0.9.92, which in turn requires pytorch 1.6.

Updates

  • A visualizer class can be specified in the config for the tester. For example, if you have umap installed and register umap.UMAP under the "visualizer" type, then you can do:

```
--tester~APPLY~2 {visualizer: {UMAP: {}}}
```

Plots will be saved in a saved_plots folder per split. When evaluating an ensemble, the plots will be saved in meta_logs/saved_plots.
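The registration mechanism isn't shown above, so here is a minimal sketch of the idea (all names here are hypothetical stand-ins, not the library's actual API): config keys like `{visualizer: {UMAP: {}}}` are resolved by looking the class name up in a registry for the "visualizer" type, and any class with a umap.UMAP-style `fit_transform` interface can be plugged in.

```python
# Hypothetical sketch of type-based registration. REGISTRY, register, and
# build are illustrative names, not the library's real API.
REGISTRY = {"visualizer": {}}

def register(type_name, cls):
    REGISTRY[type_name][cls.__name__] = cls

class UMAP:  # stand-in for umap.UMAP; real UMAP learns a low-dim embedding
    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit_transform(self, X):
        # Here we just truncate each row to n_components for illustration.
        return [row[: self.n_components] for row in X]

register("visualizer", UMAP)

def build(type_name, config):
    # config looks like {"UMAP": {"n_components": 2}}
    (name, kwargs), = config.items()
    return REGISTRY[type_name][name](**kwargs)

viz = build("visualizer", {"UMAP": {}})
print(viz.fit_transform([[1.0, 2.0, 3.0]]))  # [[1.0, 2.0]]
```

With this pattern, installing a package and registering its class is all that's needed for the config string to resolve to a usable visualizer.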

  • Added the ability to aggregate over a specific split, rather than having it hard-coded to val. The config option is split_to_aggregate:

```yaml
aggregator:
  MeanAggregator:
    split_to_aggregate: val
```
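A minimal sketch of what this option controls (the function and data shapes below are hypothetical, for illustration only): the aggregator averages a metric history for whichever split you choose, instead of always using "val".

```python
# Hypothetical mean aggregation over a configurable split.
# histories maps split name -> list of per-epoch metric values.
def mean_aggregate(histories, split_to_aggregate="val"):
    values = histories[split_to_aggregate]
    return sum(values) / len(values)

histories = {
    "train": [0.5, 1.0],
    "val": [0.25, 0.5],
    "test": [0.25, 0.75],
}
print(mean_aggregate(histories, split_to_aggregate="test"))  # 0.5
```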
  • 0th model is saved as the "best" before training begins, so that there always exists a "best" model, in case the 0th model is never surpassed.
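A sketch of this behavior (function names are hypothetical): the 0th model is checkpointed as "best" up front, and the checkpoint is overwritten only when a later epoch actually surpasses it.

```python
# Hypothetical training loop showing the "always have a best model" rule.
def train(epochs, evaluate, save_best):
    best_score = evaluate(0)
    save_best(0)                # 0th model saved as "best" before training
    for epoch in range(1, epochs + 1):
        score = evaluate(epoch)
        if score > best_score:  # overwrite "best" only when surpassed
            best_score = score
            save_best(epoch)
    return best_score

saved = []
scores = {0: 0.4, 1: 0.3, 2: 0.5, 3: 0.45}
train(3, scores.__getitem__, saved.append)
print(saved)  # [0, 2]
```

Even if no epoch ever beats the initial model, `saved` still contains epoch 0, so a "best" checkpoint always exists.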

  • Updated the loss factory to be compatible with pytorch-metric-learning 0.9.92, so the nested objects (distances, reducers, weight regularizers, embedding regularizers, and weight init functions) can be specified in the config for the loss function.
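For illustration, a loss config with nested objects might look like the following sketch (the class names are real pytorch-metric-learning 0.9.92 classes, but the exact config keys and values here are assumptions, not taken from the docs):

```yaml
loss_funcs:
  metric_loss:
    TripletMarginLoss:
      margin: 0.1
      distance:
        CosineSimilarity: {}
      reducer:
        ThresholdReducer:
          low: 0
```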

  • Added the api_parser config option, which is null by default. In the default setting, BaseAPIParser is used. If you use a custom trainer, the program will try to use API<name_of_your_trainer>, and if that doesn't exist, it will fall back to BaseAPIParser. If you set the api_parser option, then that parser will be used:

```yaml
api_parser:
  your_custom_parser:
```
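The resolution order described above can be sketched as follows (the function and registry names here are hypothetical, not the library's internals): an explicit api_parser setting wins, then API<TrainerName> is tried, then BaseAPIParser is the fallback.

```python
# Hypothetical sketch of the parser-resolution rule.
class BaseAPIParser: ...
class APIMyTrainer(BaseAPIParser): ...  # would match a trainer named MyTrainer

PARSERS = {c.__name__: c for c in (BaseAPIParser, APIMyTrainer)}

def resolve_parser(trainer_name, api_parser=None):
    if api_parser is not None:           # explicit config option wins
        return PARSERS[api_parser]
    # otherwise try API<name_of_your_trainer>, then fall back
    return PARSERS.get("API" + trainer_name, BaseAPIParser)

print(resolve_parser("MyTrainer").__name__)     # APIMyTrainer
print(resolve_parser("OtherTrainer").__name__)  # BaseAPIParser
```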
  • Changed the default folder locations in run.py. Previously they were under /content, which wasn't a nice experience for first-time users.

  • Added the log_data_to_tensorboard config option. It is True by default. Set it to False if you don't want to log data to tensorboard, which can be useful if your disk I/O is slow.
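For example, disabling tensorboard logging would look like this (a sketch; the option's exact placement in the config hierarchy may differ):

```yaml
log_data_to_tensorboard: False
```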