CP and Tucker decomposition for Convolutional Neural Networks
The goal of this program is to decompose each convolutional layer in a model, reducing both the total number of floating-point operations (I'll use the shorthand flops) in the convolutions and the number of parameters in the model.
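To see where the savings come from, here is a back-of-envelope flop comparison for a single layer, assuming a Tucker-2 style decomposition that replaces one KxK convolution with a 1x1 convolution (C_in -> R_in), a KxK convolution (R_in -> R_out), and a 1x1 convolution (R_out -> C_out). The layer sizes and ranks below are illustrative, not taken from the repository:

```python
# Hypothetical layer dimensions (not from this repo)
C_in, C_out, K, H, W = 256, 256, 3, 28, 28  # channels, kernel, output spatial size
R_in, R_out = 64, 64                         # hypothetical Tucker ranks

# Multiply-adds of the original KxK convolution
full = H * W * C_in * C_out * K * K

# Multiply-adds of the three decomposed convolutions:
# 1x1 (C_in->R_in) + KxK (R_in->R_out) + 1x1 (R_out->C_out)
tucker = H * W * (C_in * R_in + R_in * R_out * K * K + R_out * C_out)

print(f"full:   {full:,}")
print(f"tucker: {tucker:,}")
print(f"ratio:  {full / tucker:.2f}x")
```

With these (made-up) ranks the decomposed layer needs roughly 8x fewer multiply-adds; the actual savings depend entirely on the ranks chosen per layer.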
This is an extension of the work at https://github.com/jacobgil/pytorch-tensor-decompositions. In this implementation, everything, including finding the ranks and the actual CP/Tucker decomposition, is done in PyTorch without switching to numpy.
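As a rough illustration of the Tucker-2 idea on a conv weight, the sketch below uses numpy for brevity (the repository itself stays in PyTorch, as noted above). It takes a weight of shape (C_out, C_in, K, K), runs a truncated SVD on the two channel modes, and projects the weight onto the resulting factors to get a smaller core; the shapes and ranks are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
C_out, C_in, K = 32, 16, 3
R_out, R_in = 8, 4                      # hypothetical ranks
W = rng.standard_normal((C_out, C_in, K, K))

# Mode-0 unfolding (C_out x C_in*K*K): keep the leading left-singular vectors.
U0 = np.linalg.svd(W.reshape(C_out, -1), full_matrices=False)[0][:, :R_out]
# Mode-1 unfolding (C_in x C_out*K*K): keep the leading left-singular vectors.
U1 = np.linalg.svd(W.transpose(1, 0, 2, 3).reshape(C_in, -1),
                   full_matrices=False)[0][:, :R_in]

# Project both channel modes onto the factors to form the core tensor.
core = np.einsum('oikl,or,is->rskl', W, U0, U1)
print(core.shape)  # core has shape (R_out, R_in, K, K)

# U1.T, core, and U0 correspond to a 1x1 conv, a KxK conv, and a 1x1 conv.
approx = np.einsum('rskl,or,is->oikl', core, U0, U1)
```

The three factors map directly onto the decomposed layer structure: a 1x1 convolution built from U1, a small KxK convolution from the core, and a 1x1 convolution from U0.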
```
python3 scripts/decomp.py [-p PATH] [-d DECOMPTYPE] [-m MODEL] [-r CHECKPOINT] [-s STATEDICT] [-v]
```
A pre-decomposed ResNet50 is included in the models directory as resnet50_tucker.pth.
The fine-tuned parameters for the model are stored as resnet50_tucker_state.pth in the models directory.
It turns out that Tucker decomposition yields a smaller accuracy drop than CP decomposition in my experiments, so the results below are all from Tucker decomposition.
| | Top-1 | Top-5 | flops in convolutions (Giga) |
|---|---|---|---|
| Before | 56.55% | 79.09% | 1.31 |
| After | 54.90% | 77.90% | 0.45 |

| | Top-1 | Top-5 | flops in convolutions (Giga) |
|---|---|---|---|
| Before | 76.15% | 92.87% | 7.0 |
| After | 74.88% | 92.39% | 4.7 |
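A quick sanity check on the tables above: the convolution flop reductions work out to roughly 2.9x for the first model and 1.5x for the second, at the cost of about one to two points of top-1 accuracy before fine-tuning recovers some of the gap:

```python
# Flop reductions implied by the two result tables above (values in Gflops).
before = {"model 1": 1.31, "model 2": 7.0}
after = {"model 1": 0.45, "model 2": 4.7}

for name in before:
    ratio = before[name] / after[name]
    print(f"{name}: {ratio:.2f}x fewer conv flops")
```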