MATLAB Implementation of Visual Odometry using SOFT algorithm
This repository is a MATLAB implementation of Stereo Odometry based on careful Feature selection and Tracking (SOFT). The code is released under the MIT License.
The code has been tested on MATLAB R2018a and depends on the following toolboxes:
On a laptop with an Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz and 16GB RAM, the following average timings were observed:
```shell
git clone https://github.com/Mayankm96/Stereo-Odometry-SOFT.git
```
Copy your dataset into the folder `data`. In case you wish to use the KITTI Dataset, such as the Residential sequences, the following commands might be useful:

```shell
cd PATH/TO/Stereo-Odometry-SOFT

## For Residential Sequence: 61 (2011_09_26_drive_0061)
# synced+rectified data
wget -c https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0009/2011_09_26_drive_0009_sync.zip -P data
# calib.txt
wget -c https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_calib.zip -P data
```
Change the corresponding parameters in the configuration file `configFile.m` according to your needs.
Run the script `main.m` to get a plot of the estimated odometry.
The inputs to the algorithm at each time step are the left and right images at the current time instant and those from the previous timestamp.
In this section, keypoint detection and matching is divided into the following separate stages:
Corner and blob features are extracted for each image using the following steps:
Efficient Non-Maximum Suppression is applied on the filter responses to produce keypoints belonging to one of four classes: blob maxima, blob minima, corner maxima, and corner minima. To speed up feature matching, correspondences are only sought between features of the same class.
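The repository implements this step in MATLAB; as a rough illustration, a minimal (dense, unoptimized) non-maximum suppression pass over a filter response map can be sketched in Python as below. The function name, neighbourhood radius, and response threshold are assumptions for the sketch; applying the same routine to the negated blob and corner responses yields the minima classes.

```python
import numpy as np

def nms_maxima(response, radius=3, threshold=50.0):
    """Return (row, col) positions that are local maxima of `response`
    within a (2*radius+1)^2 neighbourhood and exceed `threshold`.
    A simple dense scan; the paper uses a faster block-based variant."""
    h, w = response.shape
    peaks = []
    for r in range(radius, h - radius):
        for c in range(radius, w - radius):
            v = response[r, c]
            if v <= threshold:
                continue
            window = response[r - radius:r + radius + 1,
                              c - radius:c + radius + 1]
            if v == window.max():       # strict local maximum of the window
                peaks.append((r, c))
    return peaks
```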
The feature descriptors are constructed from a set of 16 locations within an 11 x 11 block around each keypoint in the input image's gradients. The gradient images are computed by convolving 5 x 5 Sobel kernels across the input image. The descriptor has a total length of 32.
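A Python sketch of this descriptor construction is shown below. The 5 x 5 Sobel kernel is built as the outer product of a smoothing and a derivative vector, and the 16 sample offsets are a hypothetical layout; the exact pattern used by the MATLAB implementation is fixed but not specified here.

```python
import numpy as np
from scipy.ndimage import convolve

# 5x5 Sobel kernels as smoothing (x) derivative outer products.
SMOOTH = np.array([1, 4, 6, 4, 1], dtype=float)
DERIV = np.array([-1, -2, 0, 2, 1], dtype=float)
SOBEL_X = np.outer(SMOOTH, DERIV)   # horizontal gradient kernel
SOBEL_Y = SOBEL_X.T                 # vertical gradient kernel

# 16 sample offsets inside the 11x11 block (hypothetical layout).
OFFSETS = [(dr, dc) for dr in (-5, -2, 2, 5) for dc in (-5, -2, 2, 5)]

def describe(image, keypoints):
    """Build a 32-element descriptor per keypoint: horizontal and
    vertical Sobel responses sampled at 16 locations around it."""
    gx = convolve(image.astype(float), SOBEL_X, mode='nearest')
    gy = convolve(image.astype(float), SOBEL_Y, mode='nearest')
    descriptors = []
    for r, c in keypoints:
        d = [g[r + dr, c + dc] for dr, dc in OFFSETS for g in (gx, gy)]
        descriptors.append(np.array(d))   # 16 locations x 2 gradients = 32
    return descriptors
```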
This part of the algorithm is concerned with finding the features for egomotion estimation. It is based on the process described in the paper StereoScan: Dense 3D Reconstruction in Real-time. The process can be summarized as follows:
Correspondences between two images are found by computing the Sum of Absolute Differences (SAD) score between a feature in the first image and each feature of the same class in the second image.
This matching is done in a circular fashion between the left and right frames at time instants t-1 and t, as shown below:
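The SAD scoring and the circular match check can be sketched in Python as follows. The function names and the representation of features as plain descriptor lists are assumptions; the point is the loop left(t-1) → right(t-1) → right(t) → left(t) → left(t-1), where a match is kept only if it closes on the feature it started from.

```python
import numpy as np

def sad(d1, d2):
    """Sum of Absolute Differences between two descriptors."""
    return np.abs(np.asarray(d1, float) - np.asarray(d2, float)).sum()

def best_match(desc, candidates):
    """Index of the candidate descriptor with the lowest SAD score."""
    return int(np.argmin([sad(desc, c) for c in candidates]))

def circular_matches(left_prev, right_prev, right_cur, left_cur):
    """Match each feature around the circle and keep it only if the
    loop returns to the feature it started from. All inputs are lists
    of descriptors of one feature class (matching is done per class)."""
    matches = []
    for i, d in enumerate(left_prev):
        j = best_match(d, right_prev)               # left(t-1) -> right(t-1)
        k = best_match(right_prev[j], right_cur)    # right(t-1) -> right(t)
        l = best_match(right_cur[k], left_cur)      # right(t)  -> left(t)
        if best_match(left_cur[l], left_prev) == i: # left(t) -> left(t-1)
            matches.append((i, j, k, l))
    return matches
```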
To ensure a uniform distribution of features across the image, the image is divided into buckets of size 50 x 50 pixels, and only the strongest features present in each bucket are retained.
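The bucketing step can be sketched in Python as below. The per-bucket cap of 4 features is a hypothetical choice for the sketch; the text above only says the strongest features in each bucket are kept.

```python
def bucket_features(keypoints, strengths, bucket=50, max_per_bucket=4):
    """Divide the image into `bucket` x `bucket` pixel cells and keep at
    most `max_per_bucket` strongest features per cell."""
    cells = {}
    for (r, c), s in zip(keypoints, strengths):
        cells.setdefault((r // bucket, c // bucket), []).append((s, (r, c)))
    kept = []
    for feats in cells.values():
        feats.sort(key=lambda t: t[0], reverse=True)  # strongest first
        kept.extend(pt for _, pt in feats[:max_per_bucket])
    return kept
```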
The incremental rotation and translation are estimated using the P3P algorithm within a RANSAC scheme.
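The hypothesize-and-verify structure of this step can be sketched with a generic RANSAC loop in Python. For egomotion, the minimal solver would be P3P on 3 point correspondences and the residual a reprojection error; both are passed in as parameters here so the skeleton stays self-contained, and the iteration count and inlier tolerance are illustrative defaults.

```python
import numpy as np

def ransac(data, fit_minimal, residuals, sample_size,
           iters=200, inlier_tol=1.0, rng=None):
    """Generic RANSAC: repeatedly fit a model to a random minimal sample
    and keep the model with the largest inlier set."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.array([], dtype=int)
    n = len(data)
    for _ in range(iters):
        sample = rng.choice(n, size=sample_size, replace=False)
        model = fit_minimal(data[sample])     # e.g. P3P on 3 correspondences
        if model is None:
            continue
        inliers = np.flatnonzero(residuals(model, data) < inlier_tol)
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

As a toy usage, estimating a 1-D translation `t` from noisy pairs `(x, x + t)` with one outlier recovers the inlier model.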