# Implementation of the KinectFusion approach in modern C++14 and CUDA

This is an implementation of KinectFusion, based on Newcombe, Richard A., et al., *KinectFusion: Real-Time Dense Surface Mapping and Tracking* (ISMAR 2011). It makes heavy use of graphics hardware and thus allows for real-time fusion of depth image scans. The resulting fused volume can be exported either as a point cloud or as a dense surface mesh.
## Usage

```cpp
#include <kinectfusion.h>

// Define the data source
XtionCamera camera {};

// Get a global configuration (comes with default values) and adjust some parameters
kinectfusion::GlobalConfiguration configuration;
configuration.voxel_scale = 2.f;
configuration.init_depth = 700.f;
configuration.distance_threshold = 10.f;
configuration.angle_threshold = 20.f;

// Create a KinectFusion pipeline with the camera intrinsics and the global configuration
kinectfusion::Pipeline pipeline { camera.get_parameters(), configuration };

// Then, just loop over the incoming frames
while ( !end ) {
    // 1) Grab a frame from the data source
    InputFrame frame = camera.grab_frame();

    // 2) Have the pipeline fuse it into the global volume
    bool success = pipeline.process_frame(frame.depth_map, frame.color_map);
    if (!success)
        std::cout << "Frame could not be processed" << std::endl;
}

// Retrieve camera poses
auto poses = pipeline.get_poses();

// Export surface mesh
auto mesh = pipeline.extract_mesh();
kinectfusion::export_ply("data/mesh.ply", mesh);

// Export pointcloud
auto pointcloud = pipeline.extract_pointcloud();
kinectfusion::export_ply("data/pointcloud.ply", pointcloud);
```
For a more in-depth example and implementations of the data sources, have a look at the KinectFusionApp.
This library is licensed under the MIT license.