Accelerated deep learning R&D

See the `tests/pipelines` folder for more information.

- `BackwardCallback` and `BackwardCallbackOrder` as an abstraction on top of `loss.backward`. Now you can easily log model gradients or transform them before `OptimizerCallback`.
- `CheckpointCallbackOrder` for `ICheckpointCallback`.
- Minimal Python version moved to 3.7; minimal PyTorch version moved to 1.4.0.
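The `BackwardCallback` idea (running custom logic between the backward pass and the optimizer step) can be sketched framework-free. The hook, class, and state names below are hypothetical illustrations, not Catalyst's actual API:

```python
# A framework-free sketch of the BackwardCallback idea: a hook that runs
# after the backward pass and before the optimizer step, so gradients can
# be logged or transformed in between. All names here are hypothetical.

class GradClipBackwardCallback:
    """Clamps the 'gradient' to [-limit, limit] after the backward pass."""

    def __init__(self, limit: float) -> None:
        self.limit = limit

    def on_backward_end(self, state: dict) -> None:
        g = state["grad"]
        state["grad"] = max(-self.limit, min(self.limit, g))


def training_step(param: float, grad: float, lr: float, callbacks) -> float:
    # 1) forward + loss.backward() would have produced `grad` (simulated here)
    state = {"grad": grad}
    # 2) backward-order callbacks run before the optimizer sees the gradient
    for cb in callbacks:
        cb.on_backward_end(state)
    # 3) plain SGD update, standing in for the optimizer step
    return param - lr * state["grad"]


param = training_step(param=1.0, grad=100.0, lr=0.1,
                      callbacks=[GradClipBackwardCallback(limit=5.0)])
print(param)  # gradient clipped from 100.0 to 5.0 before the update
```

The same slot is where gradient logging would go: a callback that reads `state["grad"]` without modifying it.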
- `examples` folder removed from the Catalyst API. The only Runner APIs that will be supported in the future are `IRunner`, `Runner`, `ISupervisedRunner`, and `SupervisedRunner`, due to their consistency. If you are interested in any other Runner API, feel free to write your own `CustomRunner` and use `SelfSupervisedRunner` as an example.
- `Runner.{global/stage}_{batch/loader/epoch}_metrics` renamed to `Runner.{batch/loader/epoch}_metrics`.
- `CheckpointCallback` rewritten from scratch.
- `IRunner` for all `log_*` methods.
- `topk_args` renamed to `topk`.
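For context on the `topk` parameter, here is a minimal, framework-free sketch of accuracy@k; the function name and metric-key format are illustrative assumptions, not Catalyst's implementation:

```python
# Framework-free sketch of accuracy@k with a `topk` tuple, mirroring the
# new `topk` keyword (formerly `topk_args`). A prediction counts as correct
# at k if the target class is among the k highest-scoring classes.

def topk_accuracy(scores, targets, topk=(1,)):
    """scores: per-sample lists of class scores; targets: class indices."""
    results = {}
    for k in topk:
        hits = 0
        for row, target in zip(scores, targets):
            # class indices sorted by descending score
            ranked = sorted(range(len(row)), key=row.__getitem__, reverse=True)
            if target in ranked[:k]:
                hits += 1
        results[f"accuracy{k:02d}"] = hits / len(targets)
    return results

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2], [0.1, 0.2, 0.7]]
targets = [1, 1, 2]
metrics = topk_accuracy(scores, targets, topk=(1, 2))
print(metrics)  # sample 2's target is only in the top-2, not the top-1
```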
- `catalyst.contrib` removed; use `from catalyst.contrib.{smth} import {smth}` instead. Could be changed to full-imports-only in future versions for stability.
- 89-character right margin. Honestly speaking, it's much easier to maintain Catalyst with an 89-character right margin on an MBP'16.
- `ITrial` removed.
- `CustomRunner` with rewritten API.
- `catalyst-dl` scripts removed. Without the Config API we don't need them anymore.
- Nvidia Apex, Fairscale, Albumentations, Nifti, and Hydra requirements removed.
- `OnnxCallback`, `PruningCallback`, `QuantizationCallback`, `TracingCallback` removed from the callbacks API. These callbacks are under review now.

If you have any questions on the Catalyst 22 edition updates, please join the Catalyst slack for discussion.
Beta version of Catalyst 22 edition.

Distributed engines update (multi-node support) and many other improvements.

- `num_classes` for classification metrics became optional (#1379)
- `requests` requirements for `catalyst[cv]` added (#1371)

@bagxi @ditwoo @MrNightSky @Nimrais @y-ksenia @sergunya17 @Thiefwerty @zkid18
Framework architecture simplification and speedup + SSL & RecSys extensions.

- `resume` support - resolved #1193 (#1349)
- `profile` flag for `runner.train` (#1348)
- `SETTINGS.log_batch_metrics`, `SETTINGS.log_epoch_metrics`, `SETTINGS.compute_per_class_metrics` for framework-wise Metric & Logger APIs specification (#1357)
- `log_batch_metrics` and `log_epoch_metrics` options for all available Loggers (#1357)
- `compute_per_class_metrics` option for all available multiclass/label metrics (#1357)
- `catalyst-contrib` scripts reduced to `collect-env` and `project-embeddings` only
- `catalyst-dl` scripts reduced to `run` and `tune` only
- `transforms.` prefix deprecated for Catalyst-based transforms
- `catalyst.tools` moved to `catalyst.extras`
- `catalyst.data` moved to `catalyst.contrib.data`
- `catalyst.data.transforms` moved to `catalyst.contrib.data.transforms`
- `Normalize`, `ToTensor` transforms renamed to `NormalizeImage`, `ImageToTensor`
- `catalyst.contrib.data`
- `catalyst.contrib` moved to code-as-a-documentation development
- `catalyst[cv]` and `catalyst[ml]` extensions moved to flatten architecture design; examples: `catalyst.contrib.data.dataset_cv`, `catalyst.contrib.data.dataset_ml`
- `catalyst.contrib` moved to flatten architecture design; examples: `catalyst.contrib.data`, `catalyst.contrib.datasets`, `catalyst.contrib.layers`, `catalyst.contrib.models`, `catalyst.contrib.optimizers`, `catalyst.contrib.schedulers`
- `***._misc` modules
- `catalyst.utils.mixup` moved to `catalyst.utils.torch`
- `catalyst.utils.numpy` moved to `catalyst.contrib.utils.numpy`
- `SETTINGS.log_batch_metrics=True/False` or `os.environ["CATALYST_LOG_BATCH_METRICS"]`
- `SETTINGS.log_epoch_metrics=True/False` or `os.environ["CATALYST_LOG_EPOCH_METRICS"]`
- `SETTINGS.compute_per_class_metrics=True/False` or `os.environ["CATALYST_COMPUTE_PER_CLASS_METRICS"]`
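A minimal usage sketch for these toggles, assuming (as is typical for settings objects) that the environment variables must be set before `catalyst` is imported; the `SETTINGS` lines are left as comments so the sketch runs without Catalyst installed:

```python
# Usage sketch for the settings toggles above. Assumption: env vars are
# read when catalyst is imported, so set them first.
import os

# Option 1: environment variables (names from the changelog entries above)
os.environ["CATALYST_LOG_BATCH_METRICS"] = "1"
os.environ["CATALYST_LOG_EPOCH_METRICS"] = "1"
os.environ["CATALYST_COMPUTE_PER_CLASS_METRICS"] = "0"

# Option 2: the SETTINGS object in Python (commented out; needs catalyst)
# from catalyst.settings import SETTINGS
# SETTINGS.log_batch_metrics = True
# SETTINGS.compute_per_class_metrics = False

print(os.environ["CATALYST_LOG_BATCH_METRICS"])
```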
- `catalyst.contrib.pandas`
- `catalyst.contrib.parallel`
- `catalyst.contrib.models.cv`
- `catalyst.utils.misc` functions
- `catalyst.extras` removed from the public documentation

@asteyo @Dokholyan @Nimrais @y-ksenia @sergunya17
Readmes and tutorials with a few ddp fixes.

- `TopKMetric` abstraction (#1330)
- `CMCMetric` renamed from `<prefix>cmc<suffix><k>` to `<prefix>cmc<k><suffix>` (#1330)
- `NTXentLoss` (#1278), `SupervisedContrastiveLoss` (#1293)
- `ISelfSupervisedRunner`, `SelfSupervisedConfigRunner`, `SelfSupervisedRunner`, `SelfSupervisedDatasetWrapper` (#1278)
- `CategoricalRegressionLoss` and `QuantileRegressionLoss` added to the contrib (#1295)
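As background for `QuantileRegressionLoss`, here is a sketch of the standard quantile (pinball) loss that such a criterion is typically built on; this is an assumption about the underlying formula, not Catalyst's implementation:

```python
# Standard quantile (pinball) loss for a single prediction:
#   L_q(y, y_hat) = max(q * (y - y_hat), (q - 1) * (y - y_hat))
# For q > 0.5, under-prediction is penalized more than over-prediction,
# which pushes the regressor toward the q-th conditional quantile.

def pinball(y: float, y_hat: float, q: float) -> float:
    diff = y - y_hat
    return max(q * diff, (q - 1.0) * diff)

# With q = 0.75, missing low costs 3x more than missing high:
under = pinball(1.0, 0.0, q=0.75)  # under-prediction: 0.75 * 1.0
over = pinball(0.0, 1.0, q=0.75)   # over-prediction: 0.25 * 1.0
print(under, over)
```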
- `WandbLogger` updated to support artifacts and fix logging steps (#1309)
- `Runner` cleanup, with callbacks and loaders destruction, moved to `PipelineParallelFairScaleEngine` only (#1295)
- `HuberLoss` renamed to `HuberLossV0` for PyTorch compatibility (#1295)

@asteyo @AyushExel @bagxi @DN6 @gr33n-made @Nimrais @Podidiving @y-ksenia
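For reference on the `HuberLoss` / `HuberLossV0` rename, here is the textbook Huber loss formula in plain Python; this is the standard definition and an assumption about what the criterion computes, not Catalyst's actual code:

```python
# Textbook Huber loss: quadratic near zero, linear in the tails, so large
# residuals are penalized less harshly than with plain MSE:
#   loss(d) = 0.5 * d**2                  if |d| <= delta
#           = delta * (|d| - 0.5 * delta) otherwise

def huber(pred: float, target: float, delta: float = 1.0) -> float:
    d = abs(pred - target)
    if d <= delta:
        return 0.5 * d * d
    return delta * (d - 0.5 * delta)

print(huber(0.5, 0.0))  # quadratic region: 0.5 * 0.5**2
print(huber(3.0, 0.0))  # linear region: 1.0 * (3.0 - 0.5)
```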
Hi guys, nice project!

This is a test release to check out our updated infrastructure.