Use a robot arm (Baxter) with a mounted depth camera to scan an object and build its 3D model.
This is part of my Winter Project at NWU.
Videos of scanning a bottle and a meter are available here and here.
An animation is shown below:
Contents:
Before 3D scanning:
During 3D scanning:
I wrote 1 ROS node for calibration and 3 ROS nodes for 3D scanning.
Use ↓ to run calib_camera_pose.py for calibration:
$ roslaunch scan3d_by_baxter calib_camera_pose.launch
Use ↓ to start the 3D scanner:
$ roslaunch scan3d_by_baxter main_3d_scanner.launch
Details of the calibration are described in the "Algorithm" section.
The workflow of the 3 main nodes is described below.
file: src_main/n1_move_baxter.py
The node controls Baxter to move to several positions in order to take pictures of the object from different views. After Baxter reaches each goal pose, it publishes the depth camera's pose to node 2.
Input: (a) config/T_arm_to_depth.txt; (b) Hard-coded Baxter poses.
Publish: Message to node 2.
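The move-then-publish loop of node 1 can be sketched as below. This is a minimal illustration, not the actual `n1_move_baxter.py`: the goal poses, the camera offset, and the stub publisher are all placeholder values standing in for Baxter's hard-coded poses, `config/T_arm_to_depth.txt`, and the real `rospy.Publisher`.

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform with identity rotation (placeholder poses)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical goal poses of the arm in Baxter's base frame.
GOAL_POSES = [translation(0.6, 0.2, 0.3),
              translation(0.6, 0.0, 0.3),
              translation(0.6, -0.2, 0.3)]

# Fixed offset of the depth camera in the end-effector frame
# (read from config/T_arm_to_depth.txt in the real node; example value here).
T_ARM_TO_DEPTH = translation(0.0, 0.0, 0.05)

class StubPublisher:
    """Stands in for a rospy.Publisher in this sketch."""
    def __init__(self):
        self.sent = []
    def publish(self, msg):
        self.sent.append(msg)

def scan_loop(move_to, pub):
    """Visit each goal pose, then publish the depth camera's pose to node 2."""
    for T_base_to_arm in GOAL_POSES:
        move_to(T_base_to_arm)                  # blocks until the motion is done
        T_base_to_depth = T_base_to_arm @ T_ARM_TO_DEPTH
        pub.publish(T_base_to_depth)            # node 2 uses this to rotate the cloud

pub = StubPublisher()
scan_loop(move_to=lambda T: None, pub=pub)      # no-op motion for the demo
print(len(pub.sent))  # → 3
```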
file: src_main/n2_filt_and_seg_object.cpp
The node (a) subscribes to the original point cloud, (b) rotates it into Baxter's base frame, and (c) filters and segments the cloud to remove points that are noise or far away from the object. The result is then published to node 3.
Subscribe: (a) Message from node 1. (b) Point cloud from depth camera.
Publish: (b) Rotated cloud to rviz. (c) Segmented cloud to node 3.
Meanwhile, it also saves the (a) original cloud and (c) segmented cloud to the data/data/ folder.
file: src_main/n3_register_clouds_to_object.py
This node subscribes to the segmented cloud (which contains the object) from node 2. It then registers the clouds to obtain the complete 3D model of the object. Finally, the result is saved to file.
Subscribe: Segmented cloud from node 2.
Publish: Registration result to rviz.
file: src_main/n1_fake_data_publisher.cpp
This is a replacement for "n1_move_baxter.py" used for debugging. It reads Baxter poses and point clouds from file and publishes them to node 2, so I can test on previously saved data and debug without connecting to any hardware.
Locate chessboard:
Detect chessboard corners in the image with the Harris detector, then obtain and refine the corners' (u,v) positions. Since the corners also have known coordinates in the chessboard frame, (x=i, y=j, z=0), we can solve this PnP problem and get the relative pose between the chessboard and camera frames. (These operations are done in Python using the OpenCV library.)
The chessboard has a grid size of 7x9. Let the side with 7 corners be the X-axis and the side with 9 be the Y-axis. There are then 4 possible chessboard frames.
I do 2 things to obtain a unique chessboard frame:
Also, I draw the obtained chessboard coordinate frame onto the 2D image and display it for easier debugging.
Transform Pose to Baxter Frame:
Poses of Baxter's frames can be read from '/tf' topic (forward kinematics).
With some geometric computation, I can determine where the depth camera and the chessboard are in the Baxter frame.
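That geometric computation is a chain of homogeneous transforms: T_base_board = T_base_arm · T_arm_cam · T_cam_board. A numpy sketch with made-up example values (identity rotations and placeholder translations, not the real calibration numbers):

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Forward kinematics: end-effector pose in Baxter's base frame (from /tf).
T_base_arm = make_T(np.eye(3), [0.7, 0.1, 0.2])    # example values
# Hand-eye offset: depth camera pose in the end-effector frame.
T_arm_cam = make_T(np.eye(3), [0.0, 0.0, 0.05])    # example values
# Chessboard pose in the camera frame (from solvePnP).
T_cam_board = make_T(np.eye(3), [0.1, 0.0, 0.5])   # example values

# Chain the transforms to express both poses in the Baxter base frame.
T_base_cam = T_base_arm @ T_arm_cam
T_base_board = T_base_cam @ T_cam_board
# With identity rotations the translations simply add: (0.8, 0.1, 0.75).
print(T_base_board[:3, 3])
```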
Procedures of calibration:
Point the depth camera and Baxter's left-hand camera at the chessboard. The program reads the '/camera_info' topic, detects and locates the chessboard, and draws the detected coordinate frame and xyz-axes. If the detection looks good, I press the 'd' key to obtain the calibration result, which is then saved to the /config folder.
Main functions are defined in lib_cam_calib.py and lib_geo_trans.py.
After acquiring the point cloud, I apply the following processing steps (using the PCL library):
Functions are declared in pcl_filters.h and pcl_advanced.h.
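As a rough illustration of what such filters do, here are plain-numpy equivalents of two standard PCL building blocks: a pass-through range filter and a brute-force statistical outlier removal. This is a sketch for intuition only, not the actual C++ implementation in pcl_filters.h.

```python
import numpy as np

def pass_through(cloud, axis, lo, hi):
    """Keep points whose coordinate on `axis` (0=x, 1=y, 2=z) lies in [lo, hi]."""
    mask = (cloud[:, axis] >= lo) & (cloud[:, axis] <= hi)
    return cloud[mask]

def remove_statistical_outliers(cloud, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbors is unusually
    large (simplified version of PCL's StatisticalOutlierRemoval)."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the distance to self
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return cloud[mean_knn <= thresh]

# Example: a tight cluster around the origin plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, size=(100, 3))
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])

cloud = pass_through(cloud, axis=2, lo=-1.0, hi=1.0)   # removes the noise point
cloud = remove_statistical_outliers(cloud)
print(len(cloud))  # at most 100 points remain
```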
Do a global registration followed by an ICP registration. Then I use a range filter to retain only the target object.
Colored ICP was not used, because the lighting changes as the robot arm moves to different locations, which changes the object's apparent color.
Functions are defined in lib_cloud_registration.py.
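At the heart of each point-to-point ICP iteration is a closed-form least-squares alignment of matched point pairs (the Kabsch/SVD solution). A standalone numpy sketch of just that step, with synthetic data; the project itself relies on a full registration pipeline rather than this minimal version:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form R, t minimizing ||R @ src_i + t - dst_i|| over matched pairs."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known rigid motion from matched point pairs.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.5])
dst = src @ R_true.T + t_true

R, t = best_fit_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```

Full ICP alternates this solve with a nearest-neighbor correspondence search; the global registration beforehand supplies the initial alignment so ICP does not fall into a bad local minimum.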
launch/
Launch files, including scripts for starting my 3D scanner as well as debug and test files. Every ROS node has at least one corresponding launch file stored here.
config/: configuration files:
include/: C++ header files, mainly wrappers around PCL filters.
src_cpp/: C++ function definitions.
src_python/: Python functions. Test cases are also included in each script.
src_main/
Main scripts for this 3D scan project.
test/: Tests for C++ functions.
test_ros/: Testing ROS scripts, including:
Below are the dependencies. For some I give installation instructions; for the others, please refer to their official websites.
In short: Eigen, PCL, Open3D, OpenCV (Python), and other ROS packages.
Eigen 3
http://eigen.tuxfamily.org/index.php?title=Main_Page
$ sudo apt-get install libeigen3-dev
(Eigen is header-only; there is no ".so" or ".a".)
pcl 1.7
$ sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl
$ sudo apt-get update
$ sudo apt-get install libpcl-all
pcl_ros
http://wiki.ros.org/pcl_ros
$ sudo apt-get install ros-melodic-pcl-ros
$ sudo apt-get install ros-melodic-pcl-conversions
(The above two might already be installed.)
Baxter's packages
Please follow the long tutorial on Rethink Robotics' website
OpenCV (C++)
OpenCV >= 3.0
I used it for some cv::Mat operations.
This is a really nice tutorial for installing OpenCV 4.0: link.
Some other libs that might be useful
octomap
$ sudo apt-get install doxygen # a dependency of octomap
$ sudo apt install octovis # for octomap visualization
After installing the above dependencies, clone https://github.com/OctoMap/octomap and then "make install" it.
The current implementation has the following problems:
A 3D scanner project from the previous NWU MSR cohort:
https://github.com/zjudmd1015/Mini-3D-Scanner.
It was a really good example to get me started. The algorithms I used for point cloud processing are almost the same as in this project.
PCL and Open3D.
My code for the algorithms is largely adapted, piece by piece, from their official tutorial examples.
https://github.com/karaage0703/open3d_ros
I referenced this GitHub repo for converting the point cloud datatype between Open3D and ROS. It helped me understand the ROS PointCloud2 datatype, but its code and documentation have some limitations, so I actually rewrote the related functions. See this GitHub repo.