High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@Devanshu24, @KickItLikeShika, @Moh-Yakoub, @OBITORASU, @ahmedo42, @fco-dv, @sparkingdark, @touqir14, @trsvchn, @vfdev-5, @y0ast, @ydcjeff
🎉 Since September we have a new logo (#1324) 🎉
- Added `greater_or_equal` option to Checkpoint handler (#1597)
- Added `torch.cuda.manual_seed_all` to `ignite.utils.manual_seed` (#1444)
- Made the `to_onehot` function torch scriptable (#1592)
- Added `HandlersTimeProfiler`, which allows per-handler time profiling (#1398, #1474)
- Fixed `attach_opt_params_handler` to return a `RemovableEventHandle` (#1502)
- Renamed `TrainsLogger` to `ClearMLLogger`, keeping BC (#1557, #1560)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@1nF0rmed, @Amab, @BanzaiTokyo, @Devanshu24, @Nic-Ma, @RaviTezu, @SamuelMarks, @abdulelahsm, @afzal442, @ahmedo42, @dgarth, @fco-dv, @gruebel, @harsh8398, @ibotdotout, @isabela-pf, @jkhenning, @josselineperdomo, @jrieke, @n2cholas, @ramesht007, @rzats, @sdesrozis, @shngt, @sroy8091, @theodumont, @thescripted, @timgates42, @trsvchn, @uribgp, @vcarpani, @vfdev-5, @ydcjeff, @zhxxn
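To illustrate what the `greater_or_equal` score policy changes, here is a minimal stand-in sketch (the `BestScoreKeeper` class below is hypothetical, not the actual `ignite.handlers.Checkpoint` implementation):

```python
class BestScoreKeeper:
    """Keeps the best score seen so far, mimicking a checkpoint
    handler that saves only when the new score beats the best."""

    def __init__(self, greater_or_equal=False):
        self.greater_or_equal = greater_or_equal
        self.best = None

    def should_save(self, score):
        if self.best is None:
            better = True
        elif self.greater_or_equal:
            # ties count as "better": the latest model with an equal
            # score replaces the previously stored checkpoint
            better = score >= self.best
        else:
            better = score > self.best
        if better:
            self.best = score
        return better

strict = BestScoreKeeper(greater_or_equal=False)
loose = BestScoreKeeper(greater_or_equal=True)
scores = [0.5, 0.7, 0.7]
print([strict.should_save(s) for s in scores])  # [True, True, False]
print([loose.should_save(s) for s in scores])   # [True, True, True]
```

With `greater_or_equal=True`, a run that plateaus at its best score keeps the most recent checkpoint instead of the earliest one.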
- Added SSIM metric (#1217)
- Added prebuilt Docker images (#1218)
- Added distributed support for `EpochMetric` and related metrics (#1229)
- Added `required_output_keys` public attribute (#1291)
- Pre-built Docker images for computer vision and NLP tasks, powered with Nvidia/Apex, Horovod, MS DeepSpeed (#1304, #1248, #1218)
- Updates to `Checkpoint` (#1245)
- Added `idist.broadcast` (#1237)
- Added `sync_bn` option to `idist.auto_model` (#1265)
- Added `EpochOutputStore` handler (#1226)
- Added `ParamGroupScheduler` with schedulers based on different optimizers (#1274)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@M3L6H, @Tawishi, @WrRan, @ZhiliangWu, @benji011, @fco-dv, @kamahori, @kenjihiraoka, @kilsenp, @n2cholas, @nzare, @sdesrozis, @theodumont, @vfdev-5, @ydcjeff
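The `EpochOutputStore` idea is simple enough to sketch in plain Python: accumulate per-iteration outputs over one epoch so they can be inspected afterwards. This is a hypothetical stand-in, not Ignite's implementation:

```python
class OutputStore:
    """Minimal stand-in for an EpochOutputStore-style handler:
    accumulates per-iteration outputs over one epoch."""

    def __init__(self, output_transform=lambda out: out):
        self.output_transform = output_transform
        self.data = []

    def reset(self):
        # would be attached to an epoch-started event
        self.data = []

    def update(self, output):
        # would be attached to an iteration-completed event
        self.data.append(self.output_transform(output))

store = OutputStore(output_transform=lambda out: out["y_pred"])
store.reset()
for batch_out in [{"y_pred": 0.1}, {"y_pred": 0.9}]:
    store.update(batch_out)
print(store.data)  # [0.1, 0.9]
```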
- Updates to `idist.get_*` methods (#1196)
- Fixed `idist` with "nccl" backend when torch CUDA is not available (#1166)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@Joel-hanson, @WrRan, @jspisak, @marload, @ryanwongsa, @sdesrozis, @vfdev-5
- The `seed` argument of `Engine.run` is deprecated.
- Previous behaviour can be achieved with `DeterministicEngine`, introduced in #939.
- Made all `Events` be `CallableEventsWithFilter` (#788).
- Updates to `BaseLogger` (#1051).
- Deprecated `CustomPeriodicEvent` (#984).
- `RunningAverage` now computes the output quantity average instead of a sum in DDP (#991).
- Checkpoints are stored with the `.pt` extension instead of `.pth` (#873).
- The `archived` arguments of `Checkpoint` and `ModelCheckpoint` are deprecated (#873).
- `create_supervised_trainer` and `create_supervised_evaluator` do not move the model to device (#910).

See also the migration note for details on how to update your code.
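The `RunningAverage` DDP change (#991) is easy to picture with a toy reduction over per-process values (a conceptual sketch only; real DDP uses collective all-reduce operations, not a Python list):

```python
def ddp_reduce(values, op="mean"):
    """Toy all-reduce over one value per process: previously the
    quantity was summed across processes; now it is averaged."""
    total = sum(values)
    return total / len(values) if op == "mean" else total

per_rank_losses = [1.0, 2.0, 3.0, 4.0]  # one value per process
print(ddp_reduce(per_rank_losses, op="sum"))   # 10.0 (old behaviour)
print(ddp_reduce(per_rank_losses, op="mean"))  # 2.5  (new behaviour)
```

The averaged quantity no longer scales with the number of processes, which is usually what you want when logging a loss.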
- Added `ignite.distributed` as `idist` module (#1045):
  - `get_world_size()`, `get_rank()`, ...
  - `Parallel` utility and `auto` module (#1014).
- `Engine` argument is now optional in event handlers (#889, #919).
- `engine.state` can be set up before calling `engine.run` (#1028).
- `Engine` can run on a dataloader based on `IterableDataset` and without specifying `epoch_length` (#1077).
- Updates to the `Engine` class (#1048, #994).
- `epoch_length` argument is optional (#985).
- Updates to `engine.state` (#958).
- Added `Frequency` metric for ops/s calculations (#760, #783, #976).
- Added `MetricUsage` (#979, #1054).
- `Metric` can be detached (#827).
- Fixed `RunningAverage` when the output is a torch tensor (#943).
- Updates to `EpochMetric` (#967).
- Updates to `ConfusionMatrix` (#846).
- Support for `dill` (#930).
- `load_objects` can load single-object checkpoints (#772).
- Updates to `Checkpoint.load_objects` (#861).
- `model.module.state_dict()` is used for DDP and DP (#1086).
- Updates to `convert_tensor` (#740).
- Added `one_rank_only` (#882).
- Updates to `common.py` (#904).
- Added `FastaiLRFinder` (#596).
- Updates to `LRScheduler` (#1027).
- Support for `param_groups` (#1163).
- Added `NeptuneLogger` (#730, #821, #951, #954).
- Added `TrainsLogger` (#1020, #1036, #1043).
- Added `WandbLogger` (#926).
- Added `visdom_logger` to the common module (#796).
- Updates to `BaseLogger` attach APIs (#1006).
- Updates to `contrib.handlers` (#729).
- Fixed `ProgressBar` output not in sync with epoch counts (#773).
- Fixed `ProgressBar.log_message` (#768).
- `ProgressBar` now accounts for the `epoch_length` argument (#785).
- Fixed `ProgressBar` when data is an iterator without epoch length (#995).
- Fixed `setup_logger` for multiple calls (#962).
- Example of `FastaiLRFinder` on MNIST (#838).
- Example with `torch.cuda.amp` (#888).
- Added `setup_logger` to MNIST examples (#953).
- Added `Serializable` in mixins (#1000).
- Use of `EpochMetric` in `_BaseRegressionEpoch` (#970).

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @amatsukawa @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards
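The `CallableEventsWithFilter` change means an event can carry a counter-based filter (Ignite's documented syntax is e.g. `Events.ITERATION_COMPLETED(every=100)`). The filtering logic boils down to something like this stand-in sketch (the helper names here are illustrative, not Ignite's API):

```python
def every(n):
    """Event filter: fire on every n-th occurrence (1-based count)."""
    return lambda count: count % n == 0

def once(n):
    """Event filter: fire only on the n-th occurrence."""
    return lambda count: count == n

def fired(filter_fn, n_events):
    """Return the occurrence indices at which the handler would run."""
    return [c for c in range(1, n_events + 1) if filter_fn(c)]

print(fired(every(3), 10))  # [3, 6, 9]
print(fired(once(4), 10))   # [4]
```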
- The `seed` argument of `Engine.run` is deprecated.
- Previous behaviour can be achieved with `DeterministicEngine`, introduced in #939.
- Made all `Events` be `CallableEventsWithFilter` (#788).
- Updates to `BaseLogger` (#1051).
- Deprecated `CustomPeriodicEvent` (#984).
- `RunningAverage` now computes the output quantity average instead of a sum in DDP (#991).
- Checkpoints are stored with the `.pt` extension instead of `.pth` (#873).
- The `archived` arguments of `Checkpoint` and `ModelCheckpoint` are deprecated (#873).
- `create_supervised_trainer` and `create_supervised_evaluator` do not move the model to device (#910).
- Added `ignite.distributed` as `idist` module (#1045):
  - `get_world_size()`, `get_rank()`, ...
- `Engine` argument is now optional in event handlers (#889, #919).
- `engine.state` can be set up before calling `engine.run` (#1028).
- `Engine` can run on a dataloader based on `IterableDataset` and without specifying `epoch_length` (#1077).
- Updates to the `Engine` class (#1048, #994).
- `epoch_length` argument is optional (#985).
- Updates to `engine.state` (#958).
- Added `Frequency` metric for ops/s calculations (#760, #783, #976).
- Added `MetricUsage` (#979, #1054).
- `Metric` can be detached (#827).
- Fixed `RunningAverage` when the output is a torch tensor (#943).
- Updates to `EpochMetric` (#967).
- Updates to `ConfusionMatrix` (#846).
- Support for `dill` (#930).
- `load_objects` can load single-object checkpoints (#772).
- Updates to `Checkpoint.load_objects` (#861).
- `model.module.state_dict()` is used for DDP and DP (#1086).
- Updates to `convert_tensor` (#740).
- Added `one_rank_only` (#882).
- Updates to `common.py` (#904).
- Added `FastaiLRFinder` (#596).
- Updates to `LRScheduler` (#1027).
- Added `NeptuneLogger` (#730, #821, #951, #954).
- Added `TrainsLogger` (#1020, #1036, #1043).
- Added `WandbLogger` (#926).
- Added `visdom_logger` to the common module (#796).
- Updates to `BaseLogger` attach APIs (#1006).
- Updates to `contrib.handlers` (#729).
- Fixed `ProgressBar` output not in sync with epoch counts (#773).
- Fixed `ProgressBar.log_message` (#768).
- `ProgressBar` now accounts for the `epoch_length` argument (#785).
- Fixed `ProgressBar` when data is an iterator without epoch length (#995).
- Fixed `setup_logger` for multiple calls (#962).
- Example of `FastaiLRFinder` on MNIST (#838).
- Example with `torch.cuda.amp` (#888).
- Added `setup_logger` to MNIST examples (#953).
- `TrainsLogger` semantic segmentation example (#1095).
- Added `Serializable` in mixins (#1000).
- Use of `EpochMetric` in `_BaseRegressionEpoch` (#970).

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards
- `n_saved=None` option to store all checkpoints (#703)
- Finally solved issue #62: resume training from an epoch or iteration
  - `epoch_length`
  - `GET_BATCH_STARTED` and `GET_BATCH_COMPLETED`
- `create_lr_scheduler_with_warmup` (#646)

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:
Features:
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@anubhavashok, @kagrze, @maxfrei750, @vfdev-5
Various improvements in the core part of the library:
- Added `epoch_bound` parameter to `RunningAverage` (#488)
- Bug fixes with `ConfusionMatrix`, new implementation (#572) - BC breaking
- Added `event_to_attr` in `register_events` (#523)
- Added accumulative single-variable metrics (#524)
- `should_terminate` is reset between runs (#525)
- `to_onehot` returns tensor with uint8 dtype (#571) - may be BC breaking
- Removable handle returned from `Engine.add_event_handler()` to enable single-shot events (#588)
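For readers unfamiliar with the transform behind the `to_onehot` dtype change (#571), here is a plain-Python sketch of one-hot encoding (Ignite's real `to_onehot` operates on torch tensors and now returns uint8):

```python
def to_onehot(indices, num_classes):
    """Convert a list of class indices into one-hot rows
    (plain lists standing in for a uint8 tensor)."""
    onehot = []
    for idx in indices:
        row = [0] * num_classes
        row[idx] = 1
        onehot.append(row)
    return onehot

print(to_onehot([0, 2, 1], num_classes=3))
# [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```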
New documentation style 🎉
We removed the MNIST distrib example as it was misleading, and provided a distrib branch (XX/YY/2020: distrib branch merged to master) to adapt metrics for distributed computation. The code is working and under testing. Please try it in your use case and leave us feedback.
From pip:

```shell
pip install --pre pytorch-ignite
```

From conda (this installs the pytorch nightly release instead of the stable version as a dependency):

```shell
conda install ignite -c pytorch-nightly
```
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@ANUBHAVNATANI, @Bibonaut, @Evpok, @Hiroshiba, @JeroenDelcour, @Mxbonn, @anmolsjoshi, @asford, @bosr, @johnstill, @marrrcin, @vfdev-5, @willfrey
We removed the deprecated metric classes `BinaryAccuracy` and `CategoricalAccuracy`, which are replaced by `Accuracy`.
Multilabel option for `Accuracy`, `Precision`, `Recall` metrics.
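To make the multilabel setting concrete, one common convention (used here purely as an illustration; it is not necessarily Ignite's exact definition) is exact-match accuracy, where a sample counts as correct only if all of its labels match:

```python
def multilabel_accuracy(y_pred, y_true):
    """Exact-match (subset) accuracy over binary label vectors."""
    correct = sum(1 for p, t in zip(y_pred, y_true) if p == t)
    return correct / len(y_true)

y_pred = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_true = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
print(multilabel_accuracy(y_pred, y_true))  # 2 of 3 samples match
```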
Added other metrics:
Operations on metrics:

```python
p = Precision(average=False)
mean_precision = p.mean()
precision_no_bg = p[1:]
```
Improved our docs with more examples.
Added FAQ section with best practices.
Bug fixes
- `TensorboardLogger`
- `VisdomLogger`
- `PolyaxonLogger`
- `ProgressBar`
- `CustomPeriodicEvent`
We also provide pip/conda nightly builds with `pytorch-nightly` as dependency:

```shell
pip install pytorch-ignite-nightly
```

or

```shell
conda install -c pytorch ignite-nightly
```
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou
vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!
Thanks to all our contributors !