Python (PyTorch) and MATLAB (MatConvNet) implementations of our paper DFM: A Performance Baseline for Deep Feature Matching, presented at the CVPR 2021 Image Matching Workshop.
Paper (CVF) | Paper (arXiv)
Presentation (live) | Presentation (recording)
We strongly recommend using Anaconda. Open a terminal in the ./python folder and simply run the following lines to create the environment:
conda env create -f environment.yml
conda activate dfm
Dependencies
If you do not use conda, DFM needs the following dependencies:
(Versions are not strict; however, we have tried DFM with these specific versions.)
Now you are ready to test DFM with the following command:
python dfm.py --input_pairs image_pairs.txt
You should format the image_pairs.txt file as follows:
<path_of_image1A> <path_of_image1B>
<path_of_image2A> <path_of_image2B>
.
.
.
<path_of_imagenA> <path_of_imagenB>
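For example, a two-pair file could look like this (the paths below are hypothetical; substitute your own images):

```
./data/pair1_A.png ./data/pair1_B.png
./data/pair2_A.png ./data/pair2_B.png
```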
If you want to run DFM with a specific configuration, you can make changes to the following arguments in config.yml:
You can use our Image Matching Evaluation (IME) repository, which supports evaluating DFM and eight additional algorithms (SIFT, SURF, ORB, KAZE, AKAZE, SuperPoint, SuperGlue, and Patch2Pix) on the HPatches dataset. Alternatively, you can use our Matlab implementation (see the For Matlab Users section) to reproduce the results presented in the paper.
To reproduce our results given in the paper, use our Matlab implementation.
The Python implementation yields more accurate results (but with fewer features), mainly because MATLAB's matchFeatures function does not perform the ratio test bidirectionally, whereas our Python implementation does. Nevertheless, we made bidirectionality adjustable in our Python implementation as well.
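To illustrate what a bidirectional ratio test does, here is a minimal NumPy sketch (not the actual DFM code; the function name and the 0.8 threshold are our own choices for the example). A match is kept only if it passes Lowe's ratio test in both directions and the two descriptors are mutual nearest neighbors:

```python
import numpy as np

def mutual_ratio_match(desc_a, desc_b, ratio=0.8):
    """Keep (i, j) pairs that pass the ratio test in BOTH directions
    and are mutual nearest neighbors. desc_a: (N, D), desc_b: (M, D)."""
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)

    order_ab = np.argsort(d, axis=1)   # neighbors of each A row in B
    order_ba = np.argsort(d, axis=0)   # neighbors of each B column in A

    n, m = d.shape
    rows, cols = np.arange(n), np.arange(m)
    # Ratio test A -> B: nearest distance vs. second-nearest distance
    pass_ab = d[rows, order_ab[:, 0]] < ratio * d[rows, order_ab[:, 1]]
    # Ratio test B -> A
    pass_ba = d[order_ba[0], cols] < ratio * d[order_ba[1], cols]

    matches = []
    for i in range(n):
        j = int(order_ab[i, 0])
        # keep only mutual nearest neighbors passing both ratio tests
        if pass_ab[i] and pass_ba[j] and int(order_ba[0, j]) == i:
            matches.append((i, j))
    return matches
```

A unidirectional matcher would run only the A-to-B test, which admits more matches but also more outliers; the bidirectional variant trades match count for precision, as described above.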
We have implemented and tested DFM on MATLAB R2017b.
You need to install MatConvNet (we have support for matconvnet-1.0-beta24). Follow the instructions on the official website.
Once you have finished installing MatConvNet, download the pretrained VGG-19 network to the ./matlab/models folder.
Now, you are ready to try DFM!
Just open and run main_DFM.m with your own images.
Download the HPatches sequences and extract them to the ./matlab/data folder.
Run main_hpatches.m, which is in the ./matlab/HPatches Evaluation folder.
A results.txt file will be generated in the ./matlab/results/HPatches folder.
Please cite our paper if you use the code:
@InProceedings{Efe_2021_CVPR,
author = {Efe, Ufuk and Ince, Kutalmis Gokalp and Alatan, Aydin},
title = {DFM: A Performance Baseline for Deep Feature Matching},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2021},
pages = {4284-4293}
}