An open source platform for visual-inertial navigation research.
This has been one of the larger releases, with significant changes to the state propagation to support IMU intrinsics and analytical integration. The documentation has been split into the original discrete derivations and the new analytical derivations with IMU intrinsics, to allow for comparison and learning between the two.
The IMU intrinsics code was released in relation to the following publication:
Yang, Yulin, Patrick Geneva, Xingxing Zuo, and Guoquan Huang. "Online Self-Calibration for Visual-Inertial Navigation: Models, Analysis and Degeneracy." IEEE Transactions on Robotics, 2023. https://pgeneva.com/downloads/preprints/Yang2023TRO.pdf
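To give a flavor of what calibrating IMU intrinsics involves, here is a minimal sketch (not the OpenVINS implementation) of one common intrinsic model family discussed in this line of work: raw gyroscope and accelerometer readings are mapped to the true angular velocity and specific force through scale/misalignment correction matrices and biases. The function and variable names below are hypothetical.

```python
import numpy as np

def correct_imu(w_m, a_m, Dw, Da, bg, ba):
    """Apply a simple IMU intrinsic model (illustrative form only):
    raw gyroscope reading w_m and accelerometer reading a_m are
    corrected by scale/misalignment matrices Dw, Da and biases bg, ba
    before being used in state propagation."""
    w = Dw @ (w_m - bg)   # corrected angular velocity
    a = Da @ (a_m - ba)   # corrected linear acceleration
    return w, a

# With identity intrinsics and zero biases the measurements pass through unchanged.
w, a = correct_imu(np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 9.81]),
                   np.eye(3), np.eye(3), np.zeros(3), np.zeros(3))
```

Online self-calibration then amounts to estimating `Dw`, `Da`, and the biases inside the filter alongside the navigation state; the cited paper analyzes when these parameters are observable and when they degenerate.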
Example fixed masks in stereo tracking:
https://github.com/rpng/open_vins/assets/2222562/bd6f6927-96e6-4275-8ce8-93be25d00da4
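The idea behind a fixed mask is simple: features detected in masked-out image regions (e.g., a visible part of the sensor rig) are rejected before tracking. A numpy-only sketch of that filtering step, with hypothetical names and not the actual tracker code:

```python
import numpy as np

def filter_keypoints(kpts, mask):
    """Keep only keypoints (u, v) that fall on valid (nonzero) mask
    pixels. `mask` is an HxW array where 0 marks masked-out regions."""
    kpts = np.asarray(kpts, dtype=int)
    keep = mask[kpts[:, 1], kpts[:, 0]] > 0
    return kpts[keep]

# Example: mask out the left 100 pixel columns of a 640x480 image.
mask = np.ones((480, 640), dtype=np.uint8)
mask[:, :100] = 0
kpts = [(50, 240), (320, 240), (630, 10)]
filtered = filter_keypoints(kpts, mask)  # drops the point at u=50
```

In practice the same fixed mask would be passed to the feature detector for each camera so masked regions never produce tracks in the first place.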
Beyond these, there are a number of smaller changes and bug fixes; thanks to all the contributors who opened issues and PRs to address the problems. We have also recently released a monocular plane-aided VINS, termed ov_plane, which builds on the OpenVINS project. We hope that those interested will check it out.
Full Changelog: https://github.com/rpng/open_vins/compare/v2.6.2...v2.6.3
- `rclcpp::SensorDataQoS()`
- `multi_threading` (config value)
- `SimulatorInit` (renamed to prevent name conflicts)

Dynamic initialization is an implementation based on:
Dong-Si, Tue-Cuong, and Anastasios I. Mourikis. "Estimator initialization in vision-aided inertial navigation with unknown camera-IMU calibration." 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012.
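The core trick in this style of initialization is that, once positions are available from vision and IMU preintegration terms are computed, the kinematic model is linear in the unknown initial velocity and gravity, so they can be recovered with a least-squares solve. A toy sketch under those assumptions (names hypothetical; the actual method also recovers scale, calibration, and biases):

```python
import numpy as np

def solve_v0_g(times, positions, alphas):
    """Toy linear initialization: with positions p(t) (e.g. from vision)
    and IMU preintegrated displacements alpha(t), the model
        p(t) = p(0) + v0 * t + 0.5 * g * t^2 + alpha(t)
    is linear in (v0, g), so least squares recovers the initial
    velocity and gravity."""
    A, b = [], []
    p0 = np.asarray(positions[0])
    for t, p, alpha in zip(times, positions, alphas):
        A.append(np.hstack([np.eye(3) * t, np.eye(3) * 0.5 * t * t]))
        b.append(np.asarray(p) - p0 - np.asarray(alpha))
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x[:3], x[3:]
```

Two or more camera poses at distinct nonzero times already make the 6-unknown system full rank, which is why this can run without any standstill assumption.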
Demo Youtube Video: https://www.youtube.com/watch?v=eSQLWcNrx_I
https://user-images.githubusercontent.com/2222562/159347591-97b3f334-7b8b-4ff8-b41d-875aa04d2544.mp4
I ran a few runs on the ETH EuRoC MAV dataset to compare performance against previous releases. While there is no guarantee, I do not believe there are any major regressions in accuracy. The default computational cost has gone up because more points are now tracked to produce denser pointclouds; this can be tuned as necessary for any specific platform.
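For reference, this kind of accuracy comparison is typically done with the absolute trajectory error (ATE) after rigidly aligning the estimate to the groundtruth. A minimal sketch of that metric (not the evaluation code actually used here), using a least-squares SE(3) alignment without scale:

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of position error after least-squares rigid alignment
    (Umeyama-style, no scale) of estimated positions `est` to
    groundtruth positions `gt`, both Nx3 arrays."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance.
    U, _, Vt = np.linalg.svd(G.T @ E)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    err = (R @ est.T).T + t - gt
    return np.sqrt((err ** 2).sum(1).mean())
```

The alignment step matters because a VIO estimate is only defined up to a global 4-dof (or here 6-dof) transform; without it the metric would mostly measure the arbitrary starting frame.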
We have also released a package to facilitate the generation of groundtruth trajectories.
This utility was created to generate groundtruth trajectories from a motion capture system (e.g., Vicon or OptiTrack) for use in evaluating visual-inertial estimation systems. Specifically, it calculates the full 15 dof inertial IMU state at the camera frequency and produces a groundtruth trajectory similar to those provided by the EuRoC MAV datasets.
Please check it out here: https://github.com/rpng/vicon2gt
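A drastically simplified sketch of the underlying idea (the real vicon2gt tool performs a batch optimization that also estimates orientation, IMU biases, and the mocap-to-IMU time offset): resample the mocap positions at the camera timestamps and differentiate to get velocities. All names below are hypothetical.

```python
import numpy as np

def mocap_to_states(t_mocap, p_mocap, t_cam):
    """Resample mocap positions (Nx3, at times t_mocap) onto the
    camera timestamps t_cam, and estimate velocity by finite
    differences of the resampled positions."""
    p = np.stack([np.interp(t_cam, t_mocap, p_mocap[:, i]) for i in range(3)], axis=1)
    v = np.gradient(p, t_cam, axis=0)  # per-axis finite differences
    return p, v
```

Raw differentiation like this amplifies mocap noise, which is one reason the actual tool fuses the IMU readings in an optimization instead of differencing poses directly.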