For more information, please visit the following websites:
| Sensor | Specification |
| --- | --- |
| 3D LiDAR (not provided) | Ouster OS1-128, 128 channels, 120 m range |
| Frame Camera × 2 | FLIR BFS-U3-31S4C, resolution: 1024 × 768 |
| Event Camera × 2 | DAVIS346, resolution: 346 × 240, 2 built-in IMUs |
| Ground Truth | Leica BLK360 |
Submissions will be ranked based on the completeness and frequency of the trajectory, as well as on position accuracy (ATE). The score is based on the ATE of individual points on the trajectory: points with an error smaller than a distance threshold are added to your final score. This evaluation scheme is inspired by the HILTI Challenge.
Output trajectories should be transformed into the body_imu frame. We will align each trajectory with the dense ground-truth points using a rigid transformation, and then compute the Absolute Trajectory Error (ATE) over a set of discrete points. At each ground-truth point, extra penalty points are added to the final score depending on the amount of error at that point.
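The alignment step can be sketched as follows. This is only a minimal illustration of closed-form rigid (rotation + translation, no scale) alignment followed by an ATE RMSE, not the organizers' exact evaluation code:

```python
import numpy as np

def align_rigid(est, gt):
    """Closed-form rigid alignment (Kabsch) of estimated points `est`
    (N x 3) to ground-truth points `gt` (N x 3). Returns R, t such that
    R @ est_i + t best matches gt_i in the least-squares sense."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Ensure a proper rotation (det(R) = +1, no reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute Trajectory Error (RMSE) after rigid alignment."""
    R, t = align_rigid(est, gt)
    err = gt - (est @ R.T + t)
    return np.sqrt((err ** 2).sum(axis=1).mean())
```

An estimate that differs from the ground truth only by a rigid motion would score an ATE of (numerically) zero under this scheme.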
Each sequence will be evaluated over a maximum of 200 points, which leads to a maximum of $N\times 200$ points being evaluated among $N$ sequences.
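A rough sketch of such a point-wise scoring scheme is given below. The error thresholds and per-point scores here are made-up placeholders, not the official values:

```python
# Hypothetical per-point scoring: a smaller ATE at an evaluation point
# earns more of the 10 available points. These thresholds are
# placeholders, NOT the official ones.
THRESHOLDS = [(0.05, 10), (0.1, 8), (0.2, 6), (0.5, 4), (1.0, 2)]

def score_point(ate_m):
    """Score a single evaluation point from its ATE in meters (0-10)."""
    for limit, pts in THRESHOLDS:
        if ate_m <= limit:
            return pts
    return 0  # error too large: no points for this evaluation point

def score_sequence(errors, max_points=200):
    """Accumulate per-point scores and normalize to `max_points`."""
    raw = sum(score_point(e) for e in errors)
    return raw / (10 * len(errors)) * max_points
```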
Sign up for an account and submit your results in the evaluation system; the live leaderboard will update your ranking.
```
20220215_canteen_night.txt
20220215_garden_night.txt
20220219_MCR_slow_00.txt
20220226_campus_road_day.txt
...
```
```
1644928761.036623716 0.0 0.0 0.0 0.0 0.0 0.0 1.0
...
```
Each row contains `timestamp_s tx ty tz qx qy qz qw`. The timestamps are in seconds and are used to establish temporal correspondences with the ground truth. The first pose should be no later than the starting time specified above, and only poses after the starting time will be used for evaluation.
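A minimal sketch for writing and reading rows in this format (the nanosecond-precision timestamp formatting matches the example line above):

```python
def format_pose(t, tx, ty, tz, qx, qy, qz, qw):
    """Format one trajectory row: timestamp_s tx ty tz qx qy qz qw."""
    return f"{t:.9f} {tx} {ty} {tz} {qx} {qy} {qz} {qw}"

def parse_pose(line):
    """Parse a row back into (timestamp, [tx, ty, tz, qx, qy, qz, qw])."""
    vals = [float(v) for v in line.split()]
    return vals[0], vals[1:]
```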
T_bodyw_body = T_body_sensor * T_sensorw_sensor * T_body_sensor^(-1)
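Applying this transform to each pose of a trajectory can be sketched with 4×4 homogeneous matrices, where `T_body_sensor` is the extrinsic from the calibration files:

```python
import numpy as np

def to_body_frame(T_sensorw_sensor, T_body_sensor):
    """Transform a pose expressed in the sensor's world frame into the
    body_imu frame: T_bodyw_body = T_body_sensor @ T_sensorw_sensor @ inv(T_body_sensor).
    All arguments are 4x4 homogeneous transformation matrices."""
    return T_body_sensor @ T_sensorw_sensor @ np.linalg.inv(T_body_sensor)
```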
Do not publicly release your trajectory estimates, as we might re-use some of the datasets for future competitions.
We provide the rosbag data in compressed form; remember to execute the following command to decompress it.
```
# example: 20220216_garden_day_ref_compressed
rosbag decompress 20220216_garden_day.bag
```
| Frame | Description | Calibration file |
| --- | --- | --- |
| body_imu | extrinsics and intrinsics of the STIM300 | body_imu.yaml |
| event_cam00 | extrinsics and intrinsics of the left event camera | event_cam00.yaml |
| event_cam00_imu | extrinsics and intrinsics of the left event camera IMU | event_cam00_imu.yaml |
| event_cam01 | extrinsics and intrinsics of the right event camera | event_cam01.yaml |
| event_cam01_imu | extrinsics and intrinsics of the right event camera IMU | event_cam01_imu.yaml |
| frame_cam00 | extrinsics and intrinsics of the left FLIR camera | frame_cam00.yaml |
| frame_cam01 | extrinsics and intrinsics of the right FLIR camera | frame_cam01.yaml |
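Loading one of these calibration files might look like the sketch below. The key name `T_body_sensor` is an assumption for illustration; check the actual .yaml files for their exact schema:

```python
import numpy as np
import yaml  # PyYAML

def load_extrinsics(path, key="T_body_sensor"):
    """Load a calibration YAML and return a 4x4 extrinsic matrix.
    The key name is an assumption, NOT the confirmed schema of the
    provided .yaml files."""
    with open(path) as f:
        calib = yaml.safe_load(f)
    return np.array(calib[key], dtype=float).reshape(4, 4)
```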
The picture below is a schematic illustration of the reference frames (red = x, green = y, blue = z):
The results submitted by each team will be scored based on the completeness and ATE accuracy of the trajectories, and all results will be displayed on the live leaderboard. Each trajectory is scored at the standard evaluation points: each evaluation point earns 0-10 points according to its accuracy, and the accumulated score over all evaluation points is normalized to 200 points to give the final score of the sequence.
Of course, we will provide the calibration data of the IMU and cameras.
We will provide some sample sequences, collected with the same sensor kit, along with their ground truth; the ground truth for the challenge sequences, however, is not available. You can submit your own results through the evaluation system on the website.
A team may register only one account, and submission quota can only be obtained by joining the WeChat group. To prevent a team from registering multiple accounts, all members of a participating team are required to join the WeChat group. If the QR code becomes invalid, we will update it promptly. Old accounts cannot be reused; you will need to register a new account.
Jianhao Jiao, Hexiang Wei, Tianshuai Hu, Xiangcheng Hu, et al., Lujia Wang, Ming Liu, "FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of Localization and Mapping Accuracy on Diverse Platforms," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 2022.
 HILTI Challenge.