Accelerated deep learning R&D
### Added

- `AdaptiveHingeLoss`, `BPRLoss`, `HingeLoss`, `LogisticLoss`, `RocStarLoss`, and `WARPLoss` (#1269, #1282)
- `sync_bn` support for all available engines (#1275)
- `hydra-slayer` (#1264)

### Changed

- `AccumulationMetric` renamed to `AccumulativeMetric`
- moved from `catalyst.metrics._metric` to `catalyst.metrics._accumulative`
- `accululative_fields` renamed to `keys`

### Contributors

@bagxi @Casyfill @ditwoo @Nimrais @penguinflys @sergunya17 @zkid18
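To give a feel for what the new pairwise ranking losses compute, here is a minimal plain-Python sketch of the ideas behind `BPRLoss` and `HingeLoss`. The functions `bpr_loss` and `hinge_loss` below are illustrative stand-ins, not Catalyst's implementations (which operate on tensors):

```python
import math

def bpr_loss(positive_score: float, negative_score: float) -> float:
    """Bayesian Personalized Ranking loss for one (positive, negative) pair.

    The loss is -log(sigmoid(pos - neg)): it is small when the positive
    item is scored well above the negative one, and grows otherwise.
    """
    margin = positive_score - negative_score
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def hinge_loss(positive_score: float, negative_score: float,
               margin: float = 1.0) -> float:
    """Pairwise hinge loss: zero once the positive leads by at least `margin`."""
    return max(0.0, margin - (positive_score - negative_score))
```

Both losses only depend on the score difference, which is what makes them ranking losses: shifting all scores by a constant changes nothing.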
---

### Added

- `pre-commit` hook to run codestyle checker on commit (#1257)
- on-publish GitHub action for docker and docs (#1260)
- `utils.mixup_batch` (#1241)
- `requirements-neptune.txt` (#1251)

### Changed

- `expdir` in `catalyst-dl run` made optional (#1249)

### Fixed

- `BatchPrefetchLoaderWrapper` issue with batch-based PyTorch samplers (#1262)

### Contributors

@AlekseySh @bagxi @Casyfill @Dokholyan @leoromanovich @Nimrais @y-ksenia
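`utils.mixup_batch` brings mixup-style augmentation. As a rough illustration of the idea only — the function `mixup_pair` and its signature below are assumptions for this sketch, not the library API — mixup blends two samples and their labels with a random coefficient:

```python
import random

def mixup_pair(x_a, x_b, y_a, y_b, alpha: float = 0.2):
    """Sketch of the mixup idea: convex-combine two inputs and their
    (one-hot) labels. A Beta(alpha, alpha) draw is the usual choice
    for the mixing coefficient lambda.
    """
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x_a, x_b)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y_a, y_b)]
    return x, y, lam
```

The blended labels are soft targets, so the training loss should accept non-one-hot labels (e.g. cross-entropy against probabilities).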
---

### Added

- `utils.ddp_sync_run` function for synchronous ddp run
- `dataset_from_params` support in config API (#1231)

### Changed

- `utils.ddp_sync_run` used for data preparation

### Fixed

- `predict_loader` (#1235)
- `1.1.0` version changes
- `HuberLoss` name conflict for pytorch 1.9 hotfix (#1239)

### Contributors

@bagxi @y-ksenia @ditwoo @BorNick @Inkln
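A sketch of the synchronization pattern a helper like `utils.ddp_sync_run` typically follows: the primary rank performs a side-effectful step (e.g. dataset download) first, the other ranks wait, then repeat it against the warm cache. The function name `sync_run` is hypothetical, and `threading.Barrier` stands in for a distributed barrier such as `torch.distributed.barrier()`:

```python
import threading

def sync_run(rank: int, fn, barrier: threading.Barrier) -> None:
    """Illustrative synchronous-run pattern (not Catalyst's implementation).

    Rank 0 executes `fn` first; the remaining ranks block on the barrier
    and only run `fn` afterwards, so any cached artifacts already exist.
    """
    if rank == 0:
        fn()
    barrier.wait()  # everyone blocks here until rank 0 has finished
    if rank != 0:
        fn()
```

The same shape works whether the "ranks" are threads, processes, or distributed workers — only the barrier primitive changes.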
---

- `tests` folder (#1208)
- `tests/pipelines` (#1215)
- `train()` notebook (#1203)

BONUS: Catalyst workshop videos!
---

### Changed

- `catalyst.contrib` module
- `TensorboardLogger` switched from the `global_batch_step` counter to the `global_sample_step` one (#1174)
- `TensorboardLogger` logs loader metrics `on_loader_end` rather than `on_epoch_end` (#1174)
- `prefix` renamed to `metric_key` for `MetricAggregationCallback` (#1174)
- `micro`, `macro`, and `weighted` aggregations renamed to `_micro`, `_macro`, and `_weighted` (#1174)
- `BatchTransformCallback` updated (#1153)

### Fixed

- `torch.sigmoid` usage for `metrics.AUCMetric` and `metrics.auc` (#1174)
- `ConsoleLogger` (#1142)
- `_key_value` for schedulers in case of multiple optimizers (#1146)
- `Engine` logic during `runner.predict_loader` (#1134)

---

The v20 is dead, long live the v21!
- `Engine` abstraction to support various hardware backends and accelerators: CPU, GPU, multi-GPU, distributed GPU, TPU, Apex, and AMP half-precision training.
- `Logger` abstraction to support various monitoring tools: console, tensorboard, MLflow, etc.
- `Trial` abstraction to support various hyperoptimization tools: Optuna, Ray, etc.
- `Metric` abstraction to support various machine learning metrics: classification, segmentation, RecSys, and NLP.
- `Experiment` abstraction merged into the `Runner` one.
- `Runner` abstraction simplified to store only the current state of the experiment run; all validation logic moved to the callbacks, so you can easily select the best model on several metrics simultaneously.
- `Runner.input` and `Runner.output` merged into a unified `Runner.batch` storage for simplicity.
- `catalyst.utils.metrics` moved to `catalyst.metrics`.
- `Callbacks` to appropriate `Loggers`.
- `KorniaCallbacks` refactored to `BatchTransformCallback`.
- `CallbackOrder.Validation` and `CallbackOrder.Logging`
- Release docs, Python API minimal examples, Config/Hydra API example.
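The `Metric` abstraction above enables an accumulate-then-compute workflow: update a running state batch by batch, then read out the loader-level value. A minimal sketch of that pattern — class and method names below are illustrative, not Catalyst's exact API:

```python
class MeanMetric:
    """Illustrative accumulative metric with a reset/update/compute cycle,
    the pattern a Metric abstraction typically follows (names assumed).
    """

    def __init__(self) -> None:
        self.reset()

    def reset(self) -> None:
        """Clear state, e.g. at the start of a loader pass."""
        self._total = 0.0
        self._count = 0

    def update(self, value: float, num_samples: int = 1) -> None:
        """Accumulate a batch-level statistic, weighted by batch size."""
        self._total += value * num_samples
        self._count += num_samples

    def compute(self) -> float:
        """Loader-level result, e.g. what a logger would emit on loader end."""
        return self._total / max(self._count, 1)
```

Weighting updates by `num_samples` is what makes the loader-level mean exact when batch sizes differ (the last batch of an epoch is usually smaller).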