A collection of recent methods on (deep) neural network compression and acceleration. There are mainly five kinds of methods for efficient DNNs.
Note: this repo focuses mainly on pruning (with the lottery ticket hypothesis, or LTH, as a sub-topic), knowledge distillation (KD), and quantization. For other topics such as NAS, see the more comprehensive collections listed under "Related Repos and Websites" at the end of this file. Pull requests adding pertinent papers are welcome.
About abbreviations: in the lists below, `o` stands for oral, `s` for spotlight, `b` for best paper, and `w` for workshop.
1980s, 1990s
2000s
2011
2013
2014
2015
2016
2017
2018
2019
2020
2021
For LTH and other Pruning at Initialization papers, please refer to Awesome-Pruning-at-Initialization.
Before 2014
2014
2016
2017
2018
2019
2020
2021