Fast nearest-neighbor search in high-dimensional spaces is an increasingly important problem, but so far there have been few empirical attempts to compare approaches in an objective way.
This project contains tools to benchmark various implementations of approximate nearest neighbor (ANN) search for different distance metrics. We have pre-generated datasets (in HDF5 format) and Docker containers for each algorithm, along with a test suite that verifies that every algorithm works.
We have a number of precomputed data sets. All data sets are pre-split into train/test and come with ground truth data in the form of the top 100 neighbors. We store them in HDF5 format:
| Dataset | Dimensions | Train size | Test size | Neighbors | Distance | Download |
|---------|------------|------------|-----------|-----------|----------|----------|
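The ground truth above (the true top-100 neighbors of each test point) can in principle be reproduced by brute force. A minimal sketch with NumPy, assuming Euclidean distance and using small random arrays rather than an actual dataset file:

```python
import numpy as np

def top_k_neighbors(train, test, k=100):
    """Brute-force ground truth: for each test vector, return the indices
    of its k nearest training vectors under Euclidean distance."""
    # Squared distances via the expansion (a - b)^2 = a^2 - 2ab + b^2,
    # computed for all pairs at once.
    d2 = (
        (test ** 2).sum(axis=1)[:, None]
        - 2.0 * test @ train.T
        + (train ** 2).sum(axis=1)[None, :]
    )
    return np.argsort(d2, axis=1)[:, :k]

rng = np.random.default_rng(0)
train = rng.standard_normal((1000, 25)).astype(np.float32)
test = rng.standard_normal((10, 25)).astype(np.float32)
gt = top_k_neighbors(train, test, k=100)
print(gt.shape)  # (10, 100): 100 neighbor indices per test point
```

For large datasets, `np.argpartition` followed by a sort of only the top-k candidates is much faster than a full `argsort`.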
Interactive plots can be found at http://ann-benchmarks.com. These results are all as of December 2021, with all benchmarks run on an r5.4xlarge machine on AWS with
The only prerequisites are Python (tested with 3.6) and Docker.
- Run `pip install -r requirements.txt`.
- Run `python install.py` to build all the libraries inside Docker containers (this can take a while, like 10-30 minutes).
- Run `python run.py` (this can take an extremely long time, potentially days).
- Run `python create_website.py` to plot results.
- Run `python data_export.py --out res.csv` to export all results into a CSV file for additional post-processing.
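Once exported, the CSV can be post-processed with standard tools. A small sketch of such post-processing — note that the column names below (`algorithm`, `recall`, `qps`) are purely illustrative and stand in for whatever columns `data_export.py` actually emits:

```python
import csv
import io

# Stand-in for res.csv; the real column names come from data_export.py.
sample = io.StringIO(
    "algorithm,recall,qps\n"
    "annoy,0.91,1200\n"
    "annoy,0.95,800\n"
    "hnswlib,0.97,3000\n"
)

# Best recall achieved per algorithm, across all parameter settings.
best = {}
for row in csv.DictReader(sample):
    recall = float(row["recall"])
    name = row["algorithm"]
    if name not in best or recall > best[name]:
        best[name] = recall

print(best)  # {'annoy': 0.95, 'hnswlib': 0.97}
```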
You can customize the algorithms and datasets if you want to:
- Check that `algos.yaml` contains the parameter settings that you want to test.
- Run a single dataset with, e.g., `python run.py --dataset glove-100-angular`. See `python run.py --help` for more information on possible settings. Note that experiments can take a long time.
- Plot results with `python plot.py --dataset glove-100-angular` or `python create_website.py`. An example call: `python create_website.py --plottype recall/time --latex --scatter --outputdir website/`.
You can add your own algorithm into `ann_benchmarks/algorithms` by providing a small Python wrapper.
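A wrapper typically exposes a constructor plus index-building and query methods. The base class and method names assumed below (`fit`, `query`) are a sketch, not the framework's definitive interface — check the existing wrappers in `ann_benchmarks/algorithms` for the real one. A standalone brute-force example in that shape:

```python
import numpy as np

class BruteForceWrapper:
    """Sketch of a wrapper: build an index from the train set,
    then answer top-n queries. fit/query names are assumptions."""

    def __init__(self, metric):
        self.metric = metric  # e.g. "angular" or "euclidean"

    def fit(self, X):
        X = np.asarray(X, dtype=np.float32)
        if self.metric == "angular":
            # Normalize rows so a dot product gives cosine similarity.
            X = X / np.linalg.norm(X, axis=1, keepdims=True)
        self.index = X

    def query(self, v, n):
        v = np.asarray(v, dtype=np.float32)
        if self.metric == "angular":
            v = v / np.linalg.norm(v)
            d = -self.index @ v  # negate: smaller = more similar
        else:
            d = ((self.index - v) ** 2).sum(axis=1)
        return np.argsort(d)[:n]

algo = BruteForceWrapper("euclidean")
algo.fit(np.eye(4))
nearest = algo.query(np.array([1.0, 0.0, 0.0, 0.0]), 2)
print(nearest[0])  # index 0 is the exact match
```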
You can pass `--batch` to `run.py` and `plot.py` to enable batch mode.
The following publication details the design principles behind the benchmarking framework: