An implementation of a convolutional neural network (CNN) for MNIST classification, with various techniques such as data augmentation, dropout, and batch normalization.
The CNN has a 4-layer architecture.
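As a rough illustration (not the repository's exact code), here is a minimal tf.keras sketch of a 4-layer stack of this kind, assuming two convolutional layers followed by two fully-connected layers; the filter counts, kernel sizes, and layer widths are assumptions:

```python
# Illustrative 4-layer CNN for 28x28x1 MNIST inputs; filter counts and
# kernel sizes are assumptions, not the repository's exact values.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu",
                           input_shape=(28, 28, 1)),               # layer 1: convolution
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 5, padding="same", activation="relu"),  # layer 2: convolution
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),                # layer 3: fully-connected
    tf.keras.layers.Dense(10),                                     # layer 4: output, 10 classes
])
```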
The following techniques are employed to improve the performance of the CNN.
The number of training samples is increased fivefold by means of data augmentation.
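A minimal sketch of that fivefold expansion, assuming 28x28 grayscale image arrays and small random rotations and shifts (the transformation types and ranges here are assumptions, not the repository's exact values):

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment_5x(images, labels):
    """Return the original data plus 4 randomly perturbed copies (5x total)."""
    out_images, out_labels = [images], [labels]
    for _ in range(4):
        copy = np.empty_like(images)
        for i, img in enumerate(images):
            angle = np.random.uniform(-15, 15)         # rotation range is an assumption
            dy, dx = np.random.uniform(-2, 2, size=2)  # shift range is an assumption
            copy[i] = shift(rotate(img, angle, reshape=False), (dy, dx))
        out_images.append(copy)
        out_labels.append(labels)
    return np.concatenate(out_images), np.concatenate(out_labels)
```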
All convolutional and fully-connected layers use batch normalization.
The third (fully-connected) layer employs dropout.
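For illustration, a sketch of how these two techniques slot into such a stack, again written with tf.keras for brevity (the dropout rate is an assumption):

```python
# Batch normalization after the convolutional and hidden fully-connected layers,
# plus dropout on the fully-connected layer; the 0.5 rate is an assumption.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, 5, padding="same", use_bias=False, input_shape=(28, 28, 1)),
    layers.BatchNormalization(),  # normalize conv pre-activations
    layers.ReLU(),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(1024, use_bias=False),
    layers.BatchNormalization(),  # normalize dense pre-activations
    layers.ReLU(),
    layers.Dropout(0.5),          # dropout only on the fully-connected layer
    layers.Dense(10),
])
```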
The learning rate is decayed after every epoch.
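One way to realize such a per-epoch decay is a staircase exponential schedule, sketched here with tf.keras; the initial rate, decay factor, and batch size are assumptions:

```python
import tensorflow as tf

steps_per_epoch = 55000 // 128  # 55,000 train images (classic TF split) / assumed batch size
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,   # assumed starting rate
    decay_steps=steps_per_epoch,  # with staircase=True, the rate drops once per epoch
    decay_rate=0.95,              # assumed decay factor
    staircase=True,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```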
Every model makes a prediction (vote) for each test instance, and the final prediction is the class that receives the most votes.
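A minimal NumPy sketch of this majority vote, assuming each model outputs integer class labels:

```python
import numpy as np

def majority_vote(preds):
    """preds: (n_models, n_samples) array of predicted class labels (0-9).
    Returns the most-voted label for each test instance."""
    return np.array([np.bincount(preds[:, i], minlength=10).argmax()
                     for i in range(preds.shape[1])])

# Example: three models voting on two test instances.
votes = np.array([[7, 2], [7, 1], [1, 2]])
print(majority_vote(votes))  # -> [7 2]
```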
python mnist_cnn_train.py
Training logs are saved in "logs/train", and the trained model is saved as "model/model.ckpt".
python mnist_cnn_test.py --model-dir <model_directory> --batch-size <batch_size> --use-ensemble False
<model_directory> is the directory where the model to be tested is saved; do not include the filename "model.ckpt".
<batch_size> is used to reduce the memory burden on the machine. MNIST has 10,000 test images; any batch size gives the same result but requires a different amount of memory. For example:
python mnist_cnn_test.py --model-dir model/model01_99.61 --batch-size 5000 --use-ensemble False
python mnist_cnn_test.py --model-dir <model_directory> --batch-size <batch_size> --use-ensemble True
<model_directory> is the root directory, which contains one sub-directory per trained model. For example:
python mnist_cnn_test.py --model-dir model --batch-size 5000 --use-ensemble True
The CNN was trained 30 times with the same hyper-parameters. The resulting accuracy of 99.72% ranks 5th according to the linked benchmark list.
This implementation has been tested with TensorFlow r0.12.