
MNIST-multitask

Reproduction of the ICLR 2018 under-review paper "MULTI-TASK LEARNING ON MNIST IMAGE DATASETS".

The paper argues that pre-training a network on MNIST-like datasets can boost performance.
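The pre-train-then-fine-tune workflow described above can be sketched as follows. This is a minimal illustration with a plain softmax classifier and synthetic stand-in data, not the repository's actual model or training code; the function names and data shapes are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for MNIST and FashionMNIST: tiny synthetic
# 64-dim "images" with 10 classes each (real dataset loaders assumed).
def make_task(n=512, d=64, k=10):
    X = rng.normal(size=(n, d))
    y = rng.integers(0, k, size=n)
    return X, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, W=None, k=10, lr=0.1, epochs=20):
    # Plain softmax regression; W plays the role of the shared parameters.
    if W is None:
        W = np.zeros((X.shape[1], k))
    onehot = np.eye(k)[y]
    for _ in range(epochs):
        p = softmax(X @ W)
        W -= lr * X.T @ (p - onehot) / len(X)
    return W

# Multi-task pre-training: fit one parameter set on the pooled M+F data...
Xm, ym = make_task()      # "MNIST"
Xf, yf = make_task()      # "FashionMNIST"
W_shared = train(np.vstack([Xm, Xf]), np.concatenate([ym, yf]))

# ...then fine-tune on a single task, starting from the shared weights.
W_fashion = train(Xf, yf, W=W_shared.copy())

acc = (softmax(Xf @ W_fashion).argmax(1) == yf).mean()
print(f"fine-tuned accuracy on synthetic task: {acc:.3f}")
```

The single-task baseline in the results table corresponds to calling `train` on one dataset with `W=None`; the M+F column corresponds to the two-stage call shown above.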

Results

| dataset | single-task | M+F | single-task (paper) | M+F (paper) |
|---|---|---|---|---|
| MNIST | 0.9960 | 0.9956 | 0.9956 | 0.9971 |
| FashionMNIST | 0.9394 | 0.9420 | 0.9432 | 0.9518 |

Discussion

In my reproduction, FashionMNIST performs better when the network is first pre-trained on MNIST+FashionMNIST, but MNIST does not benefit from pre-training.

The gap between the reproduction and the paper's numbers may come from differences in data preprocessing.

Author

Po-Chih Huang / @pochih
