CK crowd-tuning

Collective Knowledge crowd-tuning extension that lets users crowdsource their experiments (using portable Collective Knowledge workflows), such as performance benchmarking, auto-tuning and machine learning, across diverse Linux, Windows, macOS and Android platforms provided by volunteers. Demo of DNN crowd-benchmarking and crowd-tuning:

All CK components can be found at cKnowledge.io and in one GitHub repository!

This project is hosted by the cTuning foundation.

This is a stable Collective Knowledge repository that enables customizable experiment crowdsourcing across diverse Linux, Windows, macOS and Android-based platforms provided by volunteers (mobile devices, IoT, data centers and supercomputers).

Our public experimental scenarios include universal, customizable, multi-dimensional and multi-objective DNN crowd-benchmarking and compiler crowd-tuning.

See continuously aggregated public results and unexpected behavior in the CK live repository!

Also check out our related Android apps that let you participate in our experiment crowdsourcing using spare Android mobile phones, tablets and other devices:

Further details are available on the CK wiki, the open research challenges wiki, and the pages of reproducible, CK-powered AI/SW/HW co-design competitions at ACM/IEEE conferences.

Description

This repository is based on CK machine-learning-based autotuning. It crowdsources experiments (using optimization knobs exposed via CK, such as compiler flags and OpenCL/CUDA parameters) across many machines while building a realistic, large and representative training set.
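
As a rough illustration of how one crowd-tuning iteration may look (a conceptual sketch, not the actual CK implementation; the flag pool and the build/run helpers are hypothetical), each participating machine picks a random combination of optimization knobs, runs the workload several times and reports the measured characteristics:

import random
import statistics

# Hypothetical pool of optimization knobs (here: compiler flags).
FLAG_POOL = ['-funroll-loops', '-ftree-vectorize', '-fomit-frame-pointer', '-ffast-math']

def crowdtune_iteration(build, run, repetitions=5):
    """Pick random flags, rebuild and rerun the program, return basic statistics.

    'build' and 'run' are hypothetical callables provided by the local workflow."""
    flags = ['-O3'] + random.sample(FLAG_POOL, k=random.randint(0, len(FLAG_POOL)))
    build(flags)                                 # recompile with the chosen flags
    times = [run() for _ in range(repetitions)]  # repeated measurements (seconds)
    return {'flags': flags,
            'min_time': min(times),              # best case: no contention, warm caches
            'mean_time': statistics.mean(times)}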

This is a continuation of Grigori Fursin's original postdoctoral proposal for the MILEPOST project in 2005, i.e. crowdsourcing the training of a machine-learning-based compiler across any shared computational resources such as mobile phones (supported by the non-profit cTuning foundation since 2008).

Authors

License

  • BSD, 3-clause

Prerequisites

  • Collective Knowledge framework

Usage

See the CK Getting Started Guide and the section on Experiment Crowdsourcing.
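
For example, once the CK framework and this repository are installed, a crowd-tuning scenario can also be launched programmatically through the standard ck.kernel.access() API. A minimal sketch is shown below; the exact action, module and option names are assumptions, so please check the guides above for the scenarios actually supported on your platform:

import ck.kernel as ck

# Ask CK to run a crowd-tuning scenario. The 'crowdtune' action and
# 'program' module are assumed names; consult the Getting Started Guide
# for the real ones exposed by this repository.
r = ck.access({'action': 'crowdtune',
               'module_uoa': 'program',
               'iterations': 10})

if r['return'] > 0:
    # Standard CK convention: non-zero 'return' plus an 'error' message.
    print('CK error: ' + r['error'])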

You can also participate in crowd-benchmarking and crowd-tuning from your Android mobile device via the following apps:

Notes

Together with the community, we added various analyses of the variation of empirical characteristics such as execution time and energy: min, max, mean, expected values derived from a histogram, normality tests, etc.
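
A minimal sketch of such an analysis (using NumPy and SciPy here rather than the actual CK statistical modules) could look as follows, with the expected value taken as the center of the most populated histogram bin:

import numpy as np
from scipy import stats

def characterize(samples, bins=20):
    """Summarize repeated measurements (e.g. execution times in seconds)."""
    samples = np.asarray(samples, dtype=float)
    counts, edges = np.histogram(samples, bins=bins)
    peak = int(np.argmax(counts))                 # most populated bin
    expected = 0.5 * (edges[peak] + edges[peak + 1])
    _, p_value = stats.shapiro(samples)           # normality test
    return {'min': samples.min(),
            'max': samples.max(),
            'mean': samples.mean(),
            'expected': expected,                 # expected value from the histogram
            'normal': p_value > 0.05}             # True if the distribution looks normal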

Users can decide how to calculate improvements based on the available statistics and their requirements. For example, when trying to improve compilers or hardware, we compare minimal characteristics (execution time, energy, etc.), i.e. the best we can squeeze out of the hardware when there are no cache effects, contention, etc.

Later, we suggest calculating improvements using expected values: we noticed that computer systems have "states" (similar to electron energy states in physics), so such improvements show how a given program behaves under non-ideal conditions.

Furthermore, when there is more than one expected behavior, i.e. several states, we suggest that the community analyze such cases and find the missing experiment features (such as CPU/GPU frequency) that could explain and separate those states.
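
The following sketch (again an illustration, not the CK code) contrasts the two ways of reporting an improvement: the minimum-based speedup captures the best case, while the expected-value-based speedup reflects behavior under non-ideal conditions; several histogram peaks of comparable height would hint at multiple states worth separating by extra features such as CPU/GPU frequency:

import numpy as np

def expected_value(samples, bins=20):
    """Expected value as the center of the most populated histogram bin."""
    counts, edges = np.histogram(np.asarray(samples, dtype=float), bins=bins)
    peak = int(np.argmax(counts))
    return 0.5 * (edges[peak] + edges[peak + 1])

def speedups(reference_times, optimized_times):
    """Two ways of reporting the improvement between two sets of measurements."""
    return {
        # Best-case improvement: what the hardware delivers without
        # cache effects or contention (used when tuning compilers/hardware).
        'min_speedup': min(reference_times) / min(optimized_times),
        # Improvement of the expected (most likely) behavior in non-ideal conditions.
        'expected_speedup': expected_value(reference_times) / expected_value(optimized_times)}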

See our papers for more details.

Publications

The concepts have been described in the following publications:

@inproceedings{ck-date16,
    title = {{Collective Knowledge}: towards {R\&D} sustainability},
    author = {Fursin, Grigori and Lokhmotov, Anton and Plowman, Ed},
    booktitle = {Proceedings of the Conference on Design, Automation and Test in Europe (DATE'16)},
    year = {2016},
    month = {March},
    url = {https://www.researchgate.net/publication/304010295_Collective_Knowledge_Towards_RD_Sustainability}
}

@inproceedings{cm:29db2248aba45e59:cd11e3a188574d80,
    title = {{Collective Mind, Part II}: Towards Performance- and Cost-Aware Software Engineering as a Natural Science},
    author = {Fursin, Grigori and Memon, Abdul and Guillon, Christophe and Lokhmotov, Anton},
    booktitle = {18th International Workshop on Compilers for Parallel Computing (CPC'15)},
    year = {2015},
    url = {https://arxiv.org/abs/1506.06256},
    month = {January}
}

@inproceedings{Fur2009,
    title = {{Collective Tuning Initiative}: automating and accelerating development and optimization of computing systems},
    author = {Fursin, Grigori},
    booktitle = {Proceedings of the GCC Developers' Summit},
    year = {2009},
    month = {June},
    location = {Montreal, Canada},
    keys = {http://www.gccsummit.org/2009},
    url = {https://scholar.google.com/citations?view_op=view_citation&hl=en&user=IwcnpkwAAAAJ&cstart=20&citation_for_view=IwcnpkwAAAAJ:8k81kl-MbHgC}
}

Feedback

If you have problems, questions or suggestions, do not hesitate to get in touch via the following mailing lists:
