High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
- Moved `ignite.contrib.metrics` and `ignite.contrib.handlers` to `ignite.metrics` and `ignite.handlers`. Thanks to @leej3! (See the import sketch below.)
- MPS backend [without `torch.amp.autocast`] + CI by @vfdev-5 in https://github.com/pytorch/ignite/pull/3041
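For example, code that imported from the contrib namespace now imports from the core packages. A minimal sketch; `AveragePrecision` and `ProgressBar` are assumed here to be among the moved objects:

```python
# Before the move, these lived under ignite.contrib.*:
# from ignite.contrib.metrics import AveragePrecision
# from ignite.contrib.handlers import ProgressBar

# After the move, import them from the core namespaces:
from ignite.metrics import AveragePrecision
from ignite.handlers import ProgressBar
```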
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project.
Thank you!
Full Changelog: https://github.com/pytorch/ignite/compare/v0.4.13...v0.5.0.post1
- `idist.one_rank_first` method by @AlexanderChaptykov in https://github.com/pytorch/ignite/pull/2926 (see the sketch below)
- `CosineAnnealingWarmRestarts` by @AlexanderChaptykov in https://github.com/pytorch/ignite/pull/2938
- `native::_do_all_gather` related to `group` by @sadra-barikbin in https://github.com/pytorch/ignite/pull/2947
- `save_handler` type in the `Checkpoint` class by @sadra-barikbin in https://github.com/pytorch/ignite/pull/3013
- `Checkpoint::reload_objects` when `save_handler` is not of type `DiskSaver` by @sadra-barikbin in https://github.com/pytorch/ignite/pull/3059
- `ProgressBar`'s docstring by @sadra-barikbin in https://github.com/pytorch/ignite/pull/3063
- Made `RunningAverage` and `Rouge` serializable by @sadra-barikbin in https://github.com/pytorch/ignite/pull/3035
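A minimal sketch of the new `idist.one_rank_first` helper, assuming it acts as a context manager in which the given rank (rank 0 by default) executes the block before the remaining ranks; `download_and_cache_dataset` is a hypothetical helper:

```python
import ignite.distributed as idist

def download_and_cache_dataset():
    # hypothetical helper: fetch data into a shared cache directory
    ...

# Let rank 0 prepare the dataset before the other ranks touch the cache
# (assumption: other ranks wait on a barrier, then run the block).
with idist.one_rank_first():
    dataset = download_and_cache_dataset()
```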
Full Changelog: https://github.com/pytorch/ignite/compare/v0.4.12...v0.4.13
- Added `model_transform` to `create_supervised_evaluator` so that the user is able to transform the model output into the actual prediction (`y_pred`) (#2896) (see the sketch below)
- `NeptuneLogger` (#2881)
- `ClearMLLogger`: accessing attributes of the logger retrieves those of the underlying clearml task; a `get_task` method is also added (#2898)
- Added `score_sign` to `add_early_stopping_by_val_score` and `gen_save_best_models_by_val_score` to support both error-like and accuracy-like scores (#2898)
- `Events` in Python 3.11 (#2907)
- `NeptuneSaver` (#2900, #2902)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@AlexanderChaptykov, @DeepC004, @Hummer12007, @divij-pawar, @guptaaryan16, @kshitij12345, @moienr, @normandy7, @sadra-barikbin, @sallycaoyu, @twolodzko, @vfdev-5
Full Changelog: https://github.com/pytorch/ignite/compare/v0.4.11...v0.4.12
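A quick illustration of the `model_transform` argument from this release. A minimal sketch, not from the release notes; the model and the transform are assumptions:

```python
import torch.nn as nn
from ignite.engine import create_supervised_evaluator

model = nn.Linear(10, 2)

# Sketch: suppose the model returned a dict; model_transform would extract
# the tensor that metrics should treat as y_pred, e.g. lambda out: out["logits"].
evaluator = create_supervised_evaluator(
    model,
    model_transform=lambda output: output,  # identity for a plain tensor output
)
```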
- `before` and `after` event filters (#2727)
- Mixing `every` and `before`/`after` event filters (#2860)
- `once` event filter can accept a sequence of int (#2858)

```python
from ignite.engine import Engine, Events

engine = Engine(lambda e, b: None)

# "once" event filter: handler runs on the 50th and 60th iterations only
@engine.on(Events.ITERATION_STARTED(once=[50, 60]))
def call_once(engine):
    pass

# "before" and "after" event filters: handler runs on epochs 11 to 29
@engine.on(Events.EPOCH_STARTED(after=10, before=30))
def call_after_and_before(engine):
    pass

# Mixing "every" and "before" / "after": handler runs on epochs 9, 14, 19, 24
@engine.on(Events.EPOCH_STARTED(every=5, after=8, before=25))
def call_after_and_before_every(engine):
    pass
```
- `model_transform` in `create_supervised_trainer` (#2848)
- `idist.all_gather` to take a `group` arg (#2715) (see the group sketch after the interrupt example below)
- `idist.all_reduce` to take a `group` arg (#2712)
- `idist.new_group` method (#2711)
- `LRFinder` to have more than one parameter (#2704)
- `get_param` method added to `ParamGroupScheduler` (#2720)
- Removed `TrainsLogger` and `TrainsSaver`, also removed the BC code (#2742)
- `RocCurve` (#2802)
- `EpochMetric` made idempotent (#2800)
- Fixed `LRScheduler` issue and fixed CI (#2780)
- `ModuleNotFoundError` raised instead of `RuntimeError` (#2750)
- `sync_all_reduce` to cover the update->compute->update case (#2803)
- #2875, #2872, #2871, #2869, #2868, #2867, #2866, #2864, #2863, #2854, #2852, #2840, #2849, #2844, #2839, #2838, #2835, #2826, #2822, #2820, #2807, #2805, #2795, #2788, #2787, #2798, #2793, #2790, #2786, #2778, #2777, #2765, #2760, #2759, #2757, #2751, #2750, #2748, #2741, #2739, #2736, #2730, #2729, #2726, #2724, #2722, #2721, #2719, #2718, #2717, #2706, #2705, #2701, #2432
- Dropped Python 3.7 from CI (#2836)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@DeepC004, @JakubDz2208, @Moh-Yakoub, @RishiKumarRay, @abhi-glitchhg, @crj1998, @guptaaryan16, @louis-she, @pacificdragon, @puhuk, @sadra-barikbin, @sallycaoyu, @soma2000-lang, @theory-in-progress, @vfdev-5, @ydcjeff
Example of interrupting and resuming an `Engine` run:

```python
from ignite.engine import Engine, Events

data = range(10)
max_epochs = 3

def check_input_data(e, b):
    print(f"Epoch {e.state.epoch}, Iter {e.state.iteration} | data={b}")
    i = (e.state.iteration - 1) % len(data)
    assert b == data[i]

engine = Engine(check_input_data)

@engine.on(Events.ITERATION_COMPLETED(every=11))
def call_interrupt():
    engine.interrupt()

print("Start engine run with interruptions:")
state = engine.run(data, max_epochs=max_epochs)
print("1 Engine run is interrupted at ", state.epoch, state.iteration)
state = engine.run(data, max_epochs=max_epochs)
print("2 Engine run is interrupted at ", state.epoch, state.iteration)
state = engine.run(data, max_epochs=max_epochs)
print("3 Engine ended the run at ", state.epoch, state.iteration)
```
Output:

```
Start engine run with interruptions:
Epoch 1, Iter 1 | data=0
Epoch 1, Iter 2 | data=1
Epoch 1, Iter 3 | data=2
Epoch 1, Iter 4 | data=3
Epoch 1, Iter 5 | data=4
Epoch 1, Iter 6 | data=5
Epoch 1, Iter 7 | data=6
Epoch 1, Iter 8 | data=7
Epoch 1, Iter 9 | data=8
Epoch 1, Iter 10 | data=9
Epoch 2, Iter 11 | data=0
1 Engine run is interrupted at 2 11
Epoch 2, Iter 12 | data=1
Epoch 2, Iter 13 | data=2
Epoch 2, Iter 14 | data=3
Epoch 2, Iter 15 | data=4
Epoch 2, Iter 16 | data=5
Epoch 2, Iter 17 | data=6
Epoch 2, Iter 18 | data=7
Epoch 2, Iter 19 | data=8
Epoch 2, Iter 20 | data=9
Epoch 3, Iter 21 | data=0
Epoch 3, Iter 22 | data=1
2 Engine run is interrupted at 3 22
Epoch 3, Iter 23 | data=2
Epoch 3, Iter 24 | data=3
Epoch 3, Iter 25 | data=4
Epoch 3, Iter 26 | data=5
Epoch 3, Iter 27 | data=6
Epoch 3, Iter 28 | data=7
Epoch 3, Iter 29 | data=8
Epoch 3, Iter 30 | data=9
3 Engine ended the run at 3 30
```
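The `group` argument added to `idist.all_gather`/`idist.all_reduce` above can be combined with the new `idist.new_group` method. A minimal sketch, assuming a running distributed configuration with at least two processes:

```python
import torch
import ignite.distributed as idist

# Build a process subgroup from ranks 0 and 1 and communicate only across it.
group = idist.new_group(ranks=[0, 1])

t = torch.tensor([idist.get_rank()])
reduced = idist.all_reduce(t, group=group)   # reduce over ranks 0 and 1 only
gathered = idist.all_gather(t, group=group)  # gather over ranks 0 and 1 only
```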
- `Events.default_event_filter` with `None` (#2644)
- `terminate` and `terminate_epoch` logic (#2645)
- `Checkpoint` in a distributed configuration (#2658, #2642)
- Added `save_on_rank` argument to `DiskSaver` and `Checkpoint` (#2641) (see the sketch below)
- Added `handle_buffers` option for `EMAHandler` (#2592)
- `np.median`-compatible torch median implementation (#2681)
- `Engine.terminate()` behaviour when resumed (#2678)
- #2700, #2698, #2696, #2695, #2694, #2691, #2688, #2679, #2676, #2675, #2673, #2671, #2670, #2668, #2667, #2666, #2665, #2664, #2662, #2660, #2659, #2657, #2656, #2655, #2653, #2652, #2651, #2647, #2646, #2640, #2639, #2637, #2630, #2629, #2628, #2625, #2624, #2620, #2618, #2617, #2616, #2613, #2611, #2609, #2606, #2605, #2604, #2601, #2597, #2584, #2581, #2542
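A minimal sketch of the new `save_on_rank` argument, assuming a distributed run in which checkpoints should be written by rank 1 rather than the default rank 0:

```python
import torch.nn as nn
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(2, 2)

# Write checkpoints from rank 1 instead of the default rank 0.
checkpoint = Checkpoint(
    {"model": model},
    DiskSaver("/tmp/checkpoints", create_dir=True, save_on_rank=1),
    save_on_rank=1,
)
```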
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@BowmanChow, @daniellepintz, @haochunchang, @kamalojasv181, @puhuk, @sadra-barikbin, @sandylaker, @sdesrozis, @vfdev-5
- Added `whitelist` argument to log only desired weights/grads with experiment tracking system handlers: #2550, #2523
- Added `ReduceLROnPlateauScheduler` parameter scheduler: #2449 (see the sketch below)
- `Checkpoint`: #2498
- `ModelCheckpoint`, parity with `Checkpoint`: #2486
- `LRScheduler` is now attachable to `Events.ITERATION_STARTED`: #2496
- Fixed `zero_grad` placement in `create_supervised_trainer`, which resulted in zero-grad logs: #2560, #2559, #2555, #2547
- `Checkpoint` when loading a single non-`nn.Module` object: #2487
- `Metric.reset/update` are not decorated: #2549
- `compute` method now returns `float` instead of `torch.Tensor`
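A minimal sketch of `ReduceLROnPlateauScheduler`, following the usual Ignite param-scheduler pattern. The engine setup, the metric name, and the exact keyword arguments passed through to torch's `ReduceLROnPlateau` are assumptions here:

```python
import torch
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import ReduceLROnPlateauScheduler

model = nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
evaluator = Engine(lambda e, b: None)  # stand-in evaluation engine

# Reduce the LR when the monitored metric stops improving; kwargs such as
# mode/factor/patience are assumed to be forwarded to torch's scheduler.
scheduler = ReduceLROnPlateauScheduler(
    optimizer, metric_name="loss", mode="min", factor=0.5, patience=3
)
evaluator.add_event_handler(Events.COMPLETED, scheduler)
```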
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@Davidportlouis, @DevPranjal, @Ishan-Kumar2, @KevinMusgrave, @Moh-Yakoub, @asmayer, @divo12, @gorarakelyan, @jreese, @leotac, @nishantb06, @nmcguire101, @sadra-barikbin, @sayantan1410, @sdesrozis, @vfdev-5, @yuta0821
- `Engine.run` (#2369)
- `Checkpoint.load_objects` can accept a `str` and load the checkpoint internally (#2305) (see the sketch after this section)
- `DeterministicEngine.state_dict()` (#2412)
- `EMAHandler` warm-up behaviour (#2333)
- `_compute_nproc_per_node` in case of a bad dist configuration (#2288)
- `EMAHandler` (#2326)
- `StateParamScheduler.attach` method (#2316)
- `ClearMLLogger` to retrieve the current task before trying to create a new one (#2344)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@Abo7atm, @DevPranjal, @Eunjnnn, @FarehaNousheen, @H4dr1en, @Ishan-Kumar2, @KickItLikeShika, @Priyansi, @bibhabasumohapatra, @fco-dv, @louis-she, @sandylaker, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff
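A minimal sketch of `Checkpoint.load_objects` accepting a string path (from #2305 above); the checkpoint file path is an assumption:

```python
import torch
import torch.nn as nn
from ignite.handlers import Checkpoint

model = nn.Linear(2, 2)
torch.save({"model": model.state_dict()}, "/tmp/checkpoint.pt")  # assumed path

# A filepath string can be passed directly; the checkpoint is loaded
# internally instead of requiring a pre-loaded dict.
Checkpoint.load_objects(to_load={"model": model}, checkpoint="/tmp/checkpoint.pt")
```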
- Enabled `LRFinder` to run multiple epochs (#2200)
- `save_handler` automatically detects `DiskSaver` when a path is passed (#2198)
- `Checkpoint` to use `score_name` as the metric's key (#2146) (see the sketch after this section)
- `State` parameter scheduler (#2090)
- `auto_optim` to allow gradient accumulation (#2169)
- Moved `BasicTimeProfiler`, `HandlersTimeProfiler`, `ParamScheduler`, `LRFinder` to core (#2136, #2135, #2132)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@Chandan-h-509, @Ishan-Kumar2, @KickItLikeShika, @Priyansi, @fco-dv, @gucifer, @kennethleungty, @logankilpatrick, @mfoglio, @sandylaker, @sdesrozis, @theory-in-progress, @toxa23, @trsvchn, @vfdev-5, @ydcjeff
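A minimal sketch of `Checkpoint` using `score_name` as the metric's key (from #2146 above), assuming an evaluation engine whose `state.metrics` contains an "accuracy" entry:

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(2, 2)
evaluator = Engine(lambda e, b: None)  # stand-in; assumed to fill state.metrics["accuracy"]

# With score_name alone (no score_function), the score is read from
# evaluator.state.metrics["accuracy"] and used to rank the saved files.
best_ckpt = Checkpoint(
    {"model": model},
    DiskSaver("/tmp/best_models", create_dir=True),
    n_saved=2,
    score_name="accuracy",
)
evaluator.add_event_handler(Events.COMPLETED, best_ckpt)
```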
- Added `start_lr` option to `FastaiLRFinder` (#2111) (see the sketch after this section)
- `scontrol` (#2092)
- `MetricsLambda` to work with the `reset/update/compute` API (#2091)
- `auto_dataloader` to not wrap a user-provided `DistributedSampler` (#2119)
- `DistributedProxySampler` when the sampler is already a `DistributedSampler` (#2120)
- Added `py.typed` for type checkers (#2095)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@01-vyom, @KickItLikeShika, @gucifer, @sandylaker, @schuhschuh, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff
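A minimal sketch of the new `start_lr` option on `FastaiLRFinder` (from #2111 above); the toy model, optimizer, and data are assumptions:

```python
import torch
import torch.nn as nn
from ignite.engine import create_supervised_trainer
from ignite.handlers import FastaiLRFinder

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
trainer = create_supervised_trainer(model, optimizer, nn.MSELoss())
data = [(torch.rand(4, 10), torch.rand(4, 1)) for _ in range(8)]  # toy data

lr_finder = FastaiLRFinder()
to_save = {"model": model, "optimizer": optimizer}

# start_lr sets the lower bound of the LR sweep (end_lr the upper one).
with lr_finder.attach(trainer, to_save=to_save, start_lr=1e-6, end_lr=1.0) as finder_trainer:
    finder_trainer.run(data)

print(lr_finder.lr_suggestion())
```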
- `sync_all_reduce` API (#1823)
- `EpochMetric` made more generic by extending the list of valid types (#1748)
- `required_output_keys` (#2027)
- Added `torch.cuda.amp` and `apex` automatic mixed precision for `create_supervised_trainer` and `create_supervised_evaluator` (#1714, #1589) (see the sketch at the end of this section)
- `state.batch`/`state.output` lifespan in Engine (#1919)
- `auto_dataloader` (#2028)
- `safe_mode` for `idist` broadcast (#1839)
- `idist` to support different `init_methods` (#1767)
- `EpochOutputStore` data on `engine.state`, moved to core (#1982, #1974)
- `ignite.utils.manual_seed` (#1970)
- `multi_label`, not averaged configuration for DDP (#1646)
- `PolyaxonLogger` to handle v1 and v0 (#1625)
- Added `*args`, `**kwargs` to the `BaseLogger.attach` method (#2034)
- `ProgressBar` (#1937)
- `ProgressBar` (#2079)
- `nltk-smooth2` for the BLEU metric (#1911)
- `_do_manual_all_reduce` (#1848)
- `mnist_save_resume_engine.py` example (#2077)
- `DeterministicEngine` (#2081)

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):
@01-vyom, @Devanshu24, @Juddd, @KickItLikeShika, @Moh-Yakoub, @Muktan, @OBITORASU, @Priyansi, @afzal442, @ahmedo42, @aksg87, @aniezurawski, @cozek, @devrimcavusoglu, @fco-dv, @gucifer, @log-layer, @mouradmourafiq, @radekosmulski, @sahilg06, @sdesrozis, @sparkingdark, @thomasjpfan, @touqir14, @trsvchn, @vfdev-5, @ydcjeff
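A minimal sketch of the automatic mixed precision support added in #1714/#1589; the model, optimizer, and loss are assumptions, and a CUDA device is required for `amp_mode="amp"`:

```python
import torch
import torch.nn as nn
from ignite.engine import create_supervised_trainer

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# amp_mode="amp" enables torch.cuda.amp autocast in the update step;
# scaler=True additionally wraps the backward pass with a GradScaler.
trainer = create_supervised_trainer(
    model, optimizer, criterion,
    device="cuda", amp_mode="amp", scaler=True,
)
```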