torchprune Versions

A research library for PyTorch-based neural network pruning, compression, and more.

v2.2.0

1 year ago

The new release contains code for a new paper, including comparison methods, models, and datasets.

In addition to the previous papers covered by this codebase (ALDS, PFP, SiPP, Lost), the repository now also includes our latest paper on pruning neural ODEs, which was presented at NeurIPS 2021:

Sparse Flows: Pruning Continuous-depth Models

Check out the READMEs for more info.

v2.1.0

2 years ago

The new release contains code for a new paper, including comparison methods, models, and datasets.

In addition to the previous papers covered by this codebase (PFP, SiPP, Lost), the repository now also includes our latest paper on pruning, which will be presented at NeurIPS 2021:

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition

Check out the READMEs for more info.

Detailed release update:

  • ALDS algorithm in torchprune (see the conceptual sketch after this list).
  • Various tensor decomposition methods as comparisons for ALDS.
  • More network and dataset support, including the GLUE benchmark and Hugging Face transformers.
  • Experiment code, visualization, and paper reproducibility for ALDS.
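
For context, ALDS compresses networks by replacing layers with low-rank factorizations. The snippet below is a minimal, hypothetical PyTorch sketch of that general idea (truncated SVD of a single linear layer); it is not the torchprune API and omits ALDS's automatic per-layer rank selection and multi-head decomposition.

```python
import torch
import torch.nn as nn

def low_rank_decompose(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a Linear layer with two smaller layers via truncated SVD.

    Conceptual illustration only: ALDS additionally chooses per-layer ranks
    and uses a block-wise (multi-head) decomposition, which is not shown here.
    """
    W = linear.weight.data                      # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_k = U[:, :rank] * S[:rank]                # (out_features, rank), singular values absorbed
    V_k = Vh[:rank, :]                          # (rank, in_features)

    first = nn.Linear(linear.in_features, rank, bias=False)
    second = nn.Linear(rank, linear.out_features, bias=linear.bias is not None)
    first.weight.data.copy_(V_k)
    second.weight.data.copy_(U_k)
    if linear.bias is not None:
        second.bias.data.copy_(linear.bias.data)
    return nn.Sequential(first, second)

# Usage: approximate a 512x1024 layer with a rank-64 factorization.
layer = nn.Linear(1024, 512)
compressed = low_rank_decompose(layer, rank=64)
x = torch.randn(8, 1024)
print(torch.norm(layer(x) - compressed(x)))     # approximation error
```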

v2.0.0

3 years ago

The new release contains major overhauls and improvements to the codebase.

In addition to the previous two papers covered by this codebase (PFP and SiPP), the codebase now also includes our latest paper on pruning, presented at MLSys 2021:

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy

Check out the READMEs for more info.

v1.1.0

3 years ago

Bug fixes, visualization updates, better logging, improved readability, and a simplified compression sub-module.

v1.0.1

4 years ago

Fixes a bug in distributed training with more than one GPU that caused training to stall at the end of the last epoch.

v1.0.0

4 years ago

This is the version of the code as originally published for the ICLR'20 paper Provable Filter Pruning for Efficient Neural Networks.