Reference implementations of MLPerf™ training benchmarks
This is a repository of reference implementations for the MLPerf training benchmarks. These implementations are valid as starting points for benchmark implementations but are not fully optimized and are not intended to be used for "real" performance measurements of software frameworks or hardware.
Please see the MLPerf Training Benchmark paper for a detailed description of the motivation and guiding principles behind the benchmark suite. If you use any part of this benchmark (e.g., reference implementations, submissions, etc.) in academic work, please cite the following:
```bibtex
@misc{mattson2019mlperf,
  title={MLPerf Training Benchmark},
  author={Peter Mattson and Christine Cheng and Cody Coleman and Greg Diamos and Paulius Micikevicius and David Patterson and Hanlin Tang and Gu-Yeon Wei and Peter Bailis and Victor Bittorf and David Brooks and Dehao Chen and Debojyoti Dutta and Udit Gupta and Kim Hazelwood and Andrew Hock and Xinyuan Huang and Atsushi Ike and Bill Jia and Daniel Kang and David Kanter and Naveen Kumar and Jeffery Liao and Guokai Ma and Deepak Narayanan and Tayo Oguntebi and Gennady Pekhimenko and Lillian Pentecost and Vijay Janapa Reddi and Taylor Robie and Tom St. John and Tsuguchika Tabaru and Carole-Jean Wu and Lingjie Xu and Masafumi Yamazaki and Cliff Young and Matei Zaharia},
  year={2019},
  eprint={1910.01500},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
These reference implementations are still very much "alpha" or "beta" quality. They could be improved in many ways. Please file issues or pull requests to help us improve quality.
We provide reference implementations for benchmarks in the MLPerf suite, as well as several benchmarks under development.
Each reference implementation provides the following:

* Code that implements the model in at least one framework.
* A Dockerfile which can be used to run the benchmark in a container.
* A script which downloads the appropriate dataset.
* A script which runs and times training the model.
* Documentation on the dataset, model, and machine setup.
These benchmarks have been tested on the following machine configuration:

* 16 CPUs, one Nvidia P100.
* Ubuntu 16.04, including docker with nvidia support.
* 600GB of disk (though many benchmarks require less).
Generally, a benchmark can be run with the following steps:

1. Set up docker and any other dependencies.
2. Download the dataset using `./download_dataset.sh`. This should be run outside of docker, on your host machine, from the directory the script is in (it may make assumptions about the CWD).
3. Optionally, run `./verify_dataset.sh` to ensure the dataset was successfully downloaded.
4. Build and run the docker image; the command to do this is included with each benchmark.

Each benchmark will run until the target quality is reached and then stop, printing timing results.
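The host-side part of the steps above can be sketched as a small shell helper. This is a minimal sketch, not part of the repository: the `run_benchmark` function name is hypothetical, and it assumes each benchmark directory contains the `download_dataset.sh` and (optionally) `verify_dataset.sh` scripts described above.

```shell
#!/usr/bin/env bash
# Sketch of the generic host-side run flow. The docker build/run step is
# benchmark-specific and documented per benchmark, so it is only noted here.
set -euo pipefail

run_benchmark() {
  local dir="$1"              # benchmark directory (name is a placeholder)
  cd "$dir"
  ./download_dataset.sh       # run on the host, outside docker, from this directory
  if [ -x ./verify_dataset.sh ]; then
    ./verify_dataset.sh       # optional: confirm the dataset downloaded correctly
  fi
  # Next: build and run the docker image using the command shipped
  # with this particular benchmark.
}
```

Running the function inside a subshell (as below) keeps the `cd` from leaking into your interactive shell.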
Some of these benchmarks are rather slow or take a long time to run on the reference hardware (i.e., 16 CPUs and one P100). We expect to see significant performance improvements with more hardware and optimized implementations.