PyTorch-Ignite Versions

High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.

v0.4.4.post1

PyTorch-Ignite 0.4.4 - Release Notes

Bug fixes:

  • [BC-breaking] Moved detach outside of loss function computation (#1675, #1692)
  • Added eps to avoid NaNs in Canberra error (#1699)
  • Removed size limitation for str on collective ops (#1702)
  • Fixed imports in Docker images; now installs Pillow-SIMD (#1638, #1639, #1628, #1711)

Doc improvements

  • #1645, #1653, #1654, #1671, #1672, #1691, #1687, #1686, #1685, #1684, #1676, #1688

Other improvements

  • Fixed artifact URLs for PyPI (#1629)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@Devanshu24, @KickItLikeShika, @Moh-Yakoub, @OBITORASU, @ahmedo42, @fco-dv, @sparkingdark, @touqir14, @trsvchn, @vfdev-5, @y0ast, @ydcjeff

v0.4.3

PyTorch-Ignite 0.4.3 - Release Notes

🎉 Since September we have a new logo (#1324) 🎉

Core

Metrics

  • [BC-breaking] Made metrics accumulate values on the device specified by the user (#1238)
  • Fixed backward compatibility when a custom metric returns a dict (#1478)
  • Added PSNR metric (#1570, #1595); a usage sketch follows this list
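
A minimal usage sketch of the new PSNR metric, assuming the evaluator's output is a (y_pred, y) pair of image tensors:

    from ignite.engine import Engine
    from ignite.metrics import PSNR

    def eval_step(engine, batch):
        # placeholder step: real code would run the model on the batch
        y_pred, y = batch
        return y_pred, y

    evaluator = Engine(eval_step)

    # data_range is the value range of the inputs, e.g. 1.0 for floats in [0, 1]
    PSNR(data_range=1.0).attach(evaluator, "psnr")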

Handlers

  • Checkpoint can now save models with the same filename (#1423)
  • Added greater_or_equal option to the Checkpoint handler (#1597); see the sketch after this list
  • Updated handlers to use setup_logger (#1617)
  • Added TimeLimit handler (#1611)
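
A minimal sketch of the new greater_or_equal flag, assuming model, optimizer and an evaluator engine are already defined; the save directory is hypothetical. With greater_or_equal=True, a new checkpoint whose score ties the current best is still kept:

    from ignite.engine import Events
    from ignite.handlers import Checkpoint, DiskSaver

    handler = Checkpoint(
        {"model": model, "optimizer": optimizer},
        DiskSaver("/tmp/models", create_dir=True),
        score_function=lambda engine: engine.state.metrics["accuracy"],
        score_name="accuracy",
        n_saved=2,
        greater_or_equal=True,  # also save checkpoints that tie the current best score
    )
    evaluator.add_event_handler(Events.COMPLETED, handler)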

Distributed helper module

  • Added distributed CPU tests on Windows (#1429)
  • Added kwargs to idist.auto_model (#1552); see the sketch after this list
  • Improved horovod initializer (#1559)
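
A minimal sketch of the kwargs pass-through in idist.auto_model; extra keyword arguments are forwarded to the underlying wrapper (e.g. torch DistributedDataParallel), and model is assumed to be defined:

    import ignite.distributed as idist

    # find_unused_parameters is forwarded to DistributedDataParallel
    model = idist.auto_model(model, find_unused_parameters=True)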

Others

  • Dropped Python 3.5 support (#1500)
  • Added torch.cuda.manual_seed_all to ignite.utils.manual_seed (#1444); see the sketch after this list
  • Fixed to_onehot function to be torch scriptable (#1592)
  • Introduced standard stream for logger setup helper (#1601)
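
With the manual_seed change, a single call now also seeds all CUDA devices:

    from ignite.utils import manual_seed

    # seeds random, torch and numpy; now also calls torch.cuda.manual_seed_all
    manual_seed(42)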

Docker images

  • Removed Entrypoint from Dockerfile and images (#1475)

Contrib

Metrics

  • Improved Canberra metric for DDP (#1314)
  • Improved ManhattanDistance metric for DDP (#1320)
  • Improved R2Score metric for DDP (#1318)

Handlers

  • Added new time profiler HandlersTimeProfiler, which allows per-handler time profiling (#1398, #1474); see the sketch after this list
  • Fixed attach_opt_params_handler to return RemovableEventHandle (#1502)
  • Renamed TrainsLogger to ClearMLLogger keeping BC (#1557, #1560)
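
A minimal sketch of per-handler profiling, assuming trainer is an existing Engine and data an iterable:

    from ignite.contrib.handlers import HandlersTimeProfiler

    profiler = HandlersTimeProfiler()
    profiler.attach(trainer)

    trainer.run(data, max_epochs=2)

    # per-handler timing statistics gathered during the run
    results = profiler.get_results()
    profiler.print_results(results)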

Documentation improvements

  • #1330, #1337, #1338, #1353, #1360, #1374, #1373, #1394, #1393, #1401, #1435, #1460, #1461, #1465, #1536, #1542 ...
  • Updated Sphinx to v3.2.1 (#1356, #1372)

Codebase is now MyPy-checked

  • #1349, #1351, #1352, #1355, #1362, #1363, #1370, #1379, #1418, #1419, #1416, #1447, #1484

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@1nF0rmed, @Amab, @BanzaiTokyo, @Devanshu24, @Nic-Ma, @RaviTezu, @SamuelMarks, @abdulelahsm, @afzal442, @ahmedo42, @dgarth, @fco-dv, @gruebel, @harsh8398, @ibotdotout, @isabela-pf, @jkhenning, @josselineperdomo, @jrieke, @n2cholas, @ramesht007, @rzats, @sdesrozis, @shngt, @sroy8091, @theodumont, @thescripted, @timgates42, @trsvchn, @uribgp, @vcarpani, @vfdev-5, @ydcjeff, @zhxxn

v0.4.2

PyTorch-Ignite 0.4.2 - Release Notes

Core

New Features and bug fixes

  • Added SSIM metric (#1217)

  • Added prebuilt Docker images (#1218)

  • Added distributed support for EpochMetric and related metrics (#1229)

  • Added required_output_keys public attribute (#1291)

  • Pre-built Docker images for computer vision and NLP tasks, powered by NVIDIA/Apex, Horovod, and MS DeepSpeed (#1304, #1248, #1218)

Handlers and utils

  • Allow passing keyword arguments to save function on Checkpoint (#1245)

Distributed helper module

  • Added support of Horovod (#1195)
  • Added idist.broadcast (#1237)
  • Added sync_bn option to idist.auto_model (#1265); see the sketch after this list
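
A minimal sketch of both additions, assuming the code runs inside an initialized distributed group and model is defined:

    import torch
    import ignite.distributed as idist

    # broadcast a tensor from rank 0 to all other processes
    t = torch.tensor([idist.get_rank()])
    t = idist.broadcast(t, src=0)

    # wrap the model and convert its BatchNorm layers to SyncBatchNorm
    model = idist.auto_model(model, sync_bn=True)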

Contrib

New Features and bug fixes

  • Added EpochOutputStore handler (#1226); see the sketch after this list
  • Improved displayed tag for tqdm progress bar (#1279)
  • Fixed bug with ParamGroupScheduler with schedulers based on different optimizers (#1274)
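
A minimal sketch of EpochOutputStore, assuming an evaluator engine and a val_loader; the collected outputs are assumed to be exposed via the handler's data attribute:

    from ignite.contrib.handlers import EpochOutputStore

    eos = EpochOutputStore()  # optionally pass output_transform=...
    eos.attach(evaluator)

    evaluator.run(val_loader)
    all_outputs = eos.data  # assumed attribute holding every output of the epoch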

And a lot of housekeeping: pre-September Hacktoberfest contributions

  • Added initial Mypy check at CI step (#1296)
  • Fixed typo in docs (concepts) (#1295)
  • Fixed link to pytorch documents (#1294)
  • Removed prints from tests (#1292)
  • Downgraded tqdm version to stabilize the CI (#1293)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@M3L6H, @Tawishi, @WrRan, @ZhiliangWu, @benji011, @fco-dv, @kamahori, @kenjihiraoka, @kilsenp, @n2cholas, @nzare, @sdesrozis, @theodumont, @vfdev-5, @ydcjeff

v0.4.1

PyTorch-Ignite 0.4.1 - Release Notes

Core

New Features and bug fixes

  • Improved docs for custom events (#1179)

Handlers and utils

  • Added custom filename pattern for saving checkpoints (#1127); see the sketch below
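
A minimal sketch of a custom filename pattern, assuming model is defined; the save directory and the placeholder names ({name}, {global_step}, {ext}) are illustrative assumptions:

    from ignite.handlers import Checkpoint, DiskSaver

    handler = Checkpoint(
        {"model": model},
        DiskSaver("/tmp/models", create_dir=True),
        n_saved=2,
        global_step_transform=lambda engine, _: engine.state.epoch,
        filename_pattern="{name}-epoch-{global_step}.{ext}",
    )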

Distributed helper module

  • Improved naming in _XlaDistModel (#1173)
  • Minor optimization for idist.get_* methods (#1196)
  • Fixed distributed proxy sampler runtime error (#1192)
  • Fixed a bug when using idist with the "nccl" backend while torch CUDA is not available (#1166)
  • Fixed issue with logging XLA tensors (#1207)

Contrib

New Features and bug fixes

  • Fixed warning about "TrainsLogger output_handler can not log metrics value" (#1170)
  • Improved usage of contrib common methods with other save handlers (#1171)

Examples

  • Improved Pascal VOC example (#1193)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@Joel-hanson, @WrRan, @jspisak, @marload, @ryanwongsa, @sdesrozis, @vfdev-5

v0.4.0.post1

PyTorch-Ignite 0.4.0 - Release Notes

Core

BC breaking changes

  • Simplified engine - BC-breaking change (#940, #939, #938)
    • no more internal patching of torch DataLoader.
    • seed argument of Engine.run is deprecated.
    • previous behaviour can be achieved with DeterministicEngine, introduced in #939; see the sketch below.
  • Made all Events be CallableEventsWithFilter (#788).
  • Made ignite compatible only with pytorch >=1.3 (#1016, #1150).
    • ignite is tested on the latest and nightly versions of pytorch.
    • exact compatibility with previous versions can be checked here.
  • Removed deprecated arguments from BaseLogger (#1051).
  • Deprecated CustomPeriodicEvent (#984).
  • RunningAverage now computes the average of the output quantity instead of a sum in DDP (#991).
  • Checkpoint now stores files with the .pt extension instead of .pth (#873).
  • The archived argument of Checkpoint and ModelCheckpoint is deprecated (#873).
  • create_supervised_trainer and create_supervised_evaluator no longer move the model to device (#910).

See also migration note for details on how to update your code.
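
A minimal sketch of restoring the previous reproducible behaviour with DeterministicEngine, assuming train_step and data are defined:

    from ignite.engine import DeterministicEngine

    # drop-in replacement for Engine with deterministic data iteration
    trainer = DeterministicEngine(train_step)  # train_step(engine, batch) -> output
    trainer.run(data, max_epochs=5)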

New Features and bug fixes

Ignite Distributed [Experimental]

  • Introduction of ignite.distributed as idist module (#1045)
    • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
    • supports native torch distributed configuration, XLA devices.
    • metrics computation works in all supported distributed configurations: GPUs and TPUs.
    • Parallel utility and auto module (#1014); see the sketch below.
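
A minimal sketch of the common interface together with the Parallel utility; the training body is left as a stub:

    import ignite.distributed as idist

    def training(local_rank, config):
        rank = idist.get_rank()
        world_size = idist.get_world_size()
        device = idist.device()
        # ... build model, data and trainer here ...

    # backend=None would run serially; "nccl", "gloo" and "xla-tpu" are supported
    with idist.Parallel(backend="gloo", nproc_per_node=2) as parallel:
        parallel.run(training, {})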

Engine & Events

  • Added flexibility on event handlers by packing triggering events (#868).
  • The engine argument is now optional in event handlers (#889, #919); see the sketch after this list.
  • We initialize engine.state before calling engine.run (#1028).
  • Engine can run on a dataloader based on IterableDataset and without specifying epoch_length (#1077).
  • Added user keys into Engine's state dict (#914).
  • Bug fixes in Engine class (#1048, #994).
  • The epoch_length argument is now optional (#985)
    • suitable for working with iterators of unknown finite length.
  • Added times in engine.state (#958).
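
A minimal sketch combining a filtered event (from the CallableEventsWithFilter change above) with the now-optional engine argument:

    from ignite.engine import Engine, Events

    trainer = Engine(lambda engine, batch: None)  # no-op update function

    # runs every 100 iterations; the handler may omit the engine argument
    @trainer.on(Events.ITERATION_COMPLETED(every=100))
    def log_progress():
        print("100 more iterations done")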

Metrics

  • Added Frequency metric for ops/s calculations (#760, #783, #976); see the sketch after this list.
  • Metrics computation can be customized with the introduced MetricUsage (#979, #1054)
    • batch-wise, epoch-wise, or custom-programmed metric update and compute methods.
  • Metric can be detached (#827).
  • Fixed bug in RunningAverage when output is torch tensor (#943).
  • Improved computation performance of EpochMetric (#967).
  • Fixed average recall value of ConfusionMatrix (#846).
  • Now metrics can be serialized using dill (#930).
  • Added support for nested metric values (#968).
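
A minimal sketch of the Frequency metric, where the "ntokens" output key is a hypothetical field used for illustration:

    from ignite.metrics import Frequency

    # reports processed tokens per second under the name "wps"
    wps = Frequency(output_transform=lambda output: output["ntokens"])
    wps.attach(trainer, name="wps")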

Handlers and utils

  • Checkpoint: improved filename when score value is an integer (#758).
  • Checkpoint: fixed returning the worst model of the saved models (#745).
  • Checkpoint: load_objects can load single-object checkpoints (#772); see the sketch after this list.
  • Checkpoint: we now save only one checkpoint per priority (#847).
  • Checkpoint: added kwargs to Checkpoint.load_objects (#861).
  • Checkpoint: now saves model.module.state_dict() for DDP and DP (#1086).
  • Checkpoint and related: other improvements (#937).
  • Checkpoint and EarlyStopping became stateful (#1156).
  • Support namedtuple for convert_tensor (#740).
  • Added decorator one_rank_only (#882).
  • Updated common.py (#904).
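
A minimal sketch of restoring objects from a saved checkpoint, assuming model and optimizer are defined; the path is hypothetical:

    import torch
    from ignite.handlers import Checkpoint

    checkpoint = torch.load("/tmp/models/checkpoint_100.pt")
    Checkpoint.load_objects(
        to_load={"model": model, "optimizer": optimizer},
        checkpoint=checkpoint,
    )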

Contrib

  • Added FastaiLRFinder (#596).

Metrics

  • Added RocCurve and PrecisionRecallCurve metrics (#875).

Parameters scheduling

  • Enabled multiple param groups for LRScheduler (#1027).
  • Parameter scheduling improvements (#1072, #859).
  • Parameter schedulers can work on a torch optimizer and any object with a param_groups attribute (#1163); see the sketch after this list.
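
As one example of contrib parameter scheduling, a minimal sketch with PiecewiseLinear, assuming optimizer and trainer are defined; the milestone values are illustrative:

    from ignite.contrib.handlers import PiecewiseLinear
    from ignite.engine import Events

    # lr: 0.0 -> 0.01 over the first 100 iterations, then back to 0.0 at 1000
    scheduler = PiecewiseLinear(
        optimizer, "lr",
        milestones_values=[(0, 0.0), (100, 0.01), (1000, 0.0)],
    )
    trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)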

Support of experiment tracking systems

  • Added NeptuneLogger (#730, #821, #951, #954).
  • Added TrainsLogger (#1020, #1036, #1043).
  • Added WandbLogger (#926).
  • Added visdom_logger to the common module (#796).
  • TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
  • Simplified BaseLogger attach APIs (#1006); see the sketch after this list.
  • Added kwargs to loggers' constructors and respective setup functions (#1015).
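
A minimal sketch of the simplified attach API with TensorboardLogger, assuming a trainer whose output is the loss value; the log directory is hypothetical:

    from ignite.contrib.handlers import TensorboardLogger
    from ignite.engine import Events

    tb_logger = TensorboardLogger(log_dir="/tmp/tb-logs")
    tb_logger.attach_output_handler(
        trainer,
        event_name=Events.ITERATION_COMPLETED(every=100),
        tag="training",
        output_transform=lambda loss: {"loss": loss},
    )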

Time profiling

  • Added basic time profiler to contrib.handlers (#729).

Bug fixes (some of PRs)

  • ProgressBar output not in sync with epoch counts (#773).
  • Fixed ProgressBar.log_message (#768).
  • ProgressBar now accounts for the epoch_length argument (#785).
  • Fixed broken ProgressBar if data is an iterator without epoch length (#995).
  • Improved setup_logger for multiple calls (#962).
  • Fixed incorrect log position (#1099).
  • Added missing colon to logging message (#1101).
  • Fixed order of checkpoint saving and candidate removal (#1117).

Examples

  • Basic example of FastaiLRFinder on MNIST (#838).
  • CycleGAN automatic mixed precision training example with NVIDIA/Apex or native torch.cuda.amp (#888).
  • Added setup_logger to MNIST examples (#953).
  • Added MNIST example on TPU (#956).
  • Benchmarked AMP on CIFAR100 (#917).
  • Updated ImageNet and Pascal VOC12 examples (#1125, #1138).

Housekeeping

  • Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092, ...).
  • Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093, #1113, ...).
  • Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058, ...).
  • Added Serializable in mixins (#1000).
  • Merged EpochMetric into _BaseRegressionEpoch (#970).
  • Added typing to ignite (#716, #751, #800, #844, #944, #1037).
  • Finalized dropping Python 2 support (#806).
  • Split engine into multiple parts (#724).
  • Added Python 3.8 to Conda builds (#781).
  • Black-formatted codebase with pre-commit files (#792).
  • Activated dpl v2 for Travis CI (#804).
  • AutoPEP8 (#805).
  • Fixed device conversion method (#887).
  • Refactored deps installation (#931).
  • Return handler in helpers (#997).
  • Fixed #833 (#1001).
  • Disabled propagation of loggers to ancestors (#1013).
  • Consistent PEP8-compliant imports layout (#901).

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @amatsukawa @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

v0.4rc.0.post1

PyTorch-Ignite 0.4.0 RC - Release Notes

Core

BC breaking changes

  • Simplified engine - BC-breaking change (#940, #939, #938)
    • no more internal patching of torch DataLoader.
    • seed argument of Engine.run is deprecated.
    • previous behaviour can be achieved with DeterministicEngine, introduced in #939.
  • Made all Events be CallableEventsWithFilter (#788).
  • Made ignite compatible only with pytorch >1.0 (#1016).
    • ignite is tested on the latest and nightly versions of pytorch.
    • exact compatibility with previous versions can be checked here.
  • Removed deprecated arguments from BaseLogger (#1051).
  • Deprecated CustomPeriodicEvent (#984).
  • RunningAverage now computes the average of the output quantity instead of a sum in DDP (#991).
  • Checkpoint now stores files with the .pt extension instead of .pth (#873).
  • The archived argument of Checkpoint and ModelCheckpoint is deprecated (#873).
  • create_supervised_trainer and create_supervised_evaluator no longer move the model to device (#910).

New Features and bug fixes

Ignite Distributed [Experimental]

  • Introduction of ignite.distributed as idist module (#1045)
    • common interface for distributed applications and helper methods, e.g. get_world_size(), get_rank(), ...
    • supports native torch distributed configuration, XLA devices.
    • metrics computation works in all supported distributed configurations: GPUs and TPUs.

Engine & Events

  • Added flexibility on event handlers by packing triggering events (#868).
  • The engine argument is now optional in event handlers (#889, #919).
  • We initialize engine.state before calling engine.run (#1028).
  • Engine can run on a dataloader based on IterableDataset and without specifying epoch_length (#1077).
  • Added user keys into Engine's state dict (#914).
  • Bug fixes in Engine class (#1048, #994).
  • The epoch_length argument is now optional (#985)
    • suitable for working with iterators of unknown finite length.
  • Added times in engine.state (#958).

Metrics

  • Added Frequency metric for ops/s calculations (#760, #783, #976).
  • Metrics computation can be customized with the introduced MetricUsage (#979, #1054)
    • batch-wise, epoch-wise, or custom-programmed metric update and compute methods.
  • Metric can be detached (#827).
  • Fixed bug in RunningAverage when output is torch tensor (#943).
  • Improved computation performance of EpochMetric (#967).
  • Fixed average recall value of ConfusionMatrix (#846).
  • Now metrics can be serialized using dill (#930).
  • Added support for nested metric values (#968).

Handlers and utils

  • Checkpoint: improved filename when score value is an integer (#758).
  • Checkpoint: fixed returning the worst model of the saved models (#745).
  • Checkpoint: load_objects can load single-object checkpoints (#772).
  • Checkpoint: we now save only one checkpoint per priority (#847).
  • Checkpoint: added kwargs to Checkpoint.load_objects (#861).
  • Checkpoint: now saves model.module.state_dict() for DDP and DP (#1086).
  • Checkpoint and related: other improvements (#937).
  • Support namedtuple for convert_tensor (#740).
  • Added decorator one_rank_only (#882).
  • Updated common.py (#904).

Contrib

  • Added FastaiLRFinder (#596).

Metrics

  • Added RocCurve and PrecisionRecallCurve metrics (#875).

Parameters scheduling

  • Enabled multiple param groups for LRScheduler (#1027).
  • Parameter scheduling improvements (#1072, #859).

Support of experiment tracking systems

  • Added NeptuneLogger (#730, #821, #951, #954).
  • Added TrainsLogger (#1020, #1036, #1043).
  • Added WandbLogger (#926).
  • Added visdom_logger to the common module (#796).
  • TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
  • Simplified BaseLogger attach APIs (#1006).
  • Added kwargs to loggers' constructors and respective setup functions (#1015).

Time profiling

  • Added basic time profiler to contrib.handlers (#729).

Bug fixes (some of PRs)

  • ProgressBar output not in sync with epoch counts (#773).
  • Fixed ProgressBar.log_message (#768).
  • ProgressBar now accounts for the epoch_length argument (#785).
  • Fixed broken ProgressBar if data is an iterator without epoch length (#995).
  • Improved setup_logger for multiple calls (#962).
  • Fixed incorrect log position (#1099).
  • Added missing colon to logging message (#1101).

Examples

  • Basic example of FastaiLRFinder on MNIST (#838).
  • CycleGAN automatic mixed precision training example with NVIDIA/Apex or native torch.cuda.amp (#888).
  • Added setup_logger to MNIST examples (#953).
  • Added MNIST example on TPU (#956).
  • Benchmarked AMP on CIFAR100 (#917).
  • TrainsLogger semantic segmentation example (#1095).

Housekeeping (some of PRs)

  • Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092).
  • Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093).
  • Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058).
  • Added Serializable in mixins (#1000).
  • Merged EpochMetric into _BaseRegressionEpoch (#970).
  • Added typing to ignite (#716, #751, #800, #844, #944, #1037).
  • Finalized dropping Python 2 support (#806).
  • Dynamic typing (#723).
  • Split engine into multiple parts (#724).
  • Added Python 3.8 to Conda builds (#781).
  • Black-formatted codebase with pre-commit files (#792).
  • Activated dpl v2 for Travis CI (#804).
  • AutoPEP8 (#805).
  • Fixed nightly version bug (#809).
  • Fixed device conversion method (#887).
  • Refactored deps installation (#931).
  • Return handler in helpers (#997).
  • Fixed #833 (#1001).
  • Disabled propagation of loggers to ancestors (#1013).
  • Consistent PEP8-compliant imports layout (#901).

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @ItamarWilf, @Joxis, @Muhamob, @Yevgnen, @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards

v0.3.0

Core

  • Added State repr and input batch as engine.state.batch (#641)
  • Adapted core metrics to be usable in distributed configuration (#635)
  • Added fbeta metric as a core metric (#653)
  • Added event filtering feature (e.g. every/once/event filter logic) (#656)
  • BC-breaking change: refactored ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (#673)
    • Added option n_saved=None to store all checkpoints (#703)
  • Improved accumulation metrics (#681)
  • Added min_delta option to early stopping (#685)
  • Dropped Python 2.7 support (#699)
  • Added feature: Metric can accept a dictionary (#689)
  • Added Dice coefficient metric (#680)
  • Added helper method to simplify the setup of class loggers (#712)

Engine refactoring (BC breaking change)

Finally solved issue #62: training can now be resumed from an epoch or iteration

  • Engine refactoring + features (#640); see the sketch after this list
    • engine checkpointing
    • variable epoch length defined by epoch_length
    • two additional events: GET_BATCH_STARTED and GET_BATCH_COMPLETED
    • CIFAR10 example with save/resume in distributed configuration
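
A minimal sketch of save/resume through the engine's state dict, assuming trainer and data are defined:

    # capture the engine state (epoch/iteration, epoch_length, max_epochs, ...)
    state = trainer.state_dict()

    # ... later, possibly in a fresh process, restore and resume the run ...
    trainer.load_state_dict(state)
    trainer.run(data)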

Contrib

  • Improved create_lr_scheduler_with_warmup (#646); see the sketch after this list
  • Added helper method to plot param scheduler values with matplotlib (#650)
  • BC-breaking change: support for multiple optimizer param groups (#690)
    • Added state_dict/load_state_dict (#690)
  • BC-breaking change: let the user specify tqdm parameters for log_message (#695)
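
A minimal sketch of attaching a warmup-wrapped scheduler, assuming optimizer and trainer are defined; the values are illustrative:

    from torch.optim.lr_scheduler import ExponentialLR
    from ignite.contrib.handlers import create_lr_scheduler_with_warmup
    from ignite.engine import Events

    torch_scheduler = ExponentialLR(optimizer, gamma=0.98)
    scheduler = create_lr_scheduler_with_warmup(
        torch_scheduler,
        warmup_start_value=0.0,
        warmup_end_value=0.01,  # lr reached at the end of the warmup phase
        warmup_duration=100,    # warmup length, here in iterations
    )
    trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)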

Examples

  • Added an example of hyperparameters tuning with Ax on CIFAR10 (#652)
  • Added CIFAR10 distributed example

Reproducible trainings as "References"

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks.

Features:

  • Distributed training with mixed precision by NVIDIA/Apex
  • Experiments tracking with MLflow or Polyaxon

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@anubhavashok, @kagrze, @maxfrei750, @vfdev-5

v0.2.1

Core

Various improvements in the core part of the library:

  • Added epoch_bound parameter to RunningAverage (#488)

  • Bug fixes in ConfusionMatrix via a new implementation (#572) - BC-breaking

  • Added event_to_attr in register_events (#523)

  • Added accumulative single-variable metrics (#524)

  • should_terminate is reset between runs (#525)

  • to_onehot returns a tensor with uint8 dtype (#571) - may be BC-breaking

  • Removable handle returned from Engine.add_event_handler() to enable single-shot events (#588); see the sketch after this list

  • New documentation style 🎉
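
A minimal sketch of detaching a handler via the returned handle:

    from ignite.engine import Engine, Events

    trainer = Engine(lambda engine, batch: None)  # no-op update function

    def on_complete(engine):
        print("run finished")

    handle = trainer.add_event_handler(Events.COMPLETED, on_complete)
    # detach the handler again once it is no longer needed
    handle.remove()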

Distributed

We removed the MNIST distrib example as misleading and provided a distrib branch (XX/YY/2020: distrib branch merged to master) to adapt metrics for distributed computation. The code is working and under testing. Please try it in your use case and leave us feedback.

Now in Contributions module

  • Added mlflow logger (#558)
  • R-Squared Metric in regression metrics module (#496)
  • Add tag field to OptimizerParamsHandler (#502)
  • Improved ProgressBar with TerminateOnNan (#506)
  • Support for layer freezing with Tensorboard integration (#515)
  • Improved OutputHandler API (#531)
  • Improved create_lr_scheduler_with_warmup (#556)
  • Added "all" option to metric_names in contrib loggers (#565)
  • Added GPU usage info as metric (#569)
  • Other bug fixes

Notebook examples

  • Added Cycle-GAN notebook (#500)
  • Finetune EfficientNet-B0 on CIFAR100 (#544)
  • Added Fashion MNIST jupyter notebook (#549)

Updated nightly builds

From pip:

pip install --pre pytorch-ignite

From conda (this installs the pytorch nightly release instead of the stable version as a dependency):

conda install ignite -c pytorch-nightly

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

@ANUBHAVNATANI, @Bibonaut, @Evpok, @Hiroshiba, @JeroenDelcour, @Mxbonn, @anmolsjoshi, @asford, @bosr, @johnstill, @marrrcin, @vfdev-5, @willfrey

v0.2.0

Core

  • We removed the deprecated metric classes BinaryAccuracy and CategoricalAccuracy, which are replaced by Accuracy.

  • Multilabel option for Accuracy, Precision, Recall metrics.

  • Added other metrics.

  • Operations on metrics, e.g. p = Precision(average=False); see the sketch after this list

    • apply PyTorch operators: mean_precision = p.mean()
    • indexing: precision_no_bg = p[1:]
  • Improved our docs with more examples.

  • Added FAQ section with best practices.

  • Bug fixes
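
A minimal sketch of metric arithmetic, assuming an evaluator engine; composed results attach like ordinary metrics:

    from ignite.metrics import Precision

    p = Precision(average=False)   # per-class precision vector
    mean_precision = p.mean()      # PyTorch-style operator applied to the metric
    precision_no_bg = p[1:]        # indexing, e.g. to drop a background class

    mean_precision.attach(evaluator, "mean_precision")
    precision_no_bg.attach(evaluator, "precision_no_bg")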

Now in Contributions module

Notebook examples

  • VAE on MNIST
  • CNN for text classification

Nightly builds with pytorch-nightly as dependency

We also provide pip/conda nightly builds with pytorch-nightly as a dependency:

pip install pytorch-ignite-nightly

or

conda install -c pytorch ignite-nightly 

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 ! 💯 We really appreciate your involvement in the project (in alphabetical order):

Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou

vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!

v0.1.2
  • Improved and fixed bugs with binary accuracy, precision, and recall
  • Metric arithmetic
  • ParamScheduler supports multiple optimizers / multiple parameter groups

Thanks to all our contributors!