Robot Operating System framework for autonomous landing of a miniature UAV in non-GPS mode.
Robot Operating System (ROS) framework for autonomous landing of a UAV on a target using onboard control. GPS-based navigation is unsuitable for precision tasks such as landing, so computer vision techniques are used to detect the target accurately and estimate its distance.
The drone used is the DJI Matrice 100. An NVIDIA Jetson TK1 is used for onboard control, along with a USB camera.
Setup includes preparing the Jetson TK1 board, activating the drone, and establishing serial communication between them.
This ROS example implements the functionality of the DJI Onboard SDK. It consists of the core library and client packages demonstrating communication with the Matrice 100 and A3 flight controllers. Clone the repository onto your system and run catkin_make to compile the libraries. The packages included are:
Apart from the DJI Onboard SDK packages, two additional packages have been included, which are explained below.
This package is responsible for detecting the target around the UAV, estimating its distance, and generating control signals accordingly to maneuver the drone to the location precisely. It is divided into two parts:
SIFT features
This Python file contains the algorithm to detect the target in the image acquired from the camera and estimate its distance from the drone. The algorithm is based on the SIFT feature extraction method included in the OpenCV library. When applied to an image, this method identifies and stores the keypoints in the input image. Each keypoint is assigned a unique 128-element descriptor vector. For example, keypoints are extracted from the image used as the landing mark for the drone; this image plays the role of the training image.
Next, the features of the input image from the drone's camera are extracted and stored.
Finally, the features from both images (the input image used here is different from the one used above) are matched and the target is identified. This feature matching is done through a Euclidean-distance-based nearest-neighbor approach.
Once the target has been identified in the input image, its distance can be estimated using the basic principles of Cartesian geometry.
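One common way to do this, assuming a downward-facing pinhole camera at a known altitude, is similar triangles: the ground-plane offset equals the pixel offset from the image centre scaled by altitude over focal length. The function and parameter names below are illustrative, not taken from the project's code.

```python
def ground_offset(cx_px, cy_px, img_w, img_h, altitude_m, focal_px):
    """Estimate the target's ground-plane offset (metres) from the point
    directly below the drone, given its centre (cx_px, cy_px) in the image.

    Pinhole-camera sketch: offset = pixel_offset * altitude / focal_length.
    """
    dx_px = cx_px - img_w / 2.0   # pixel offset from image centre
    dy_px = cy_px - img_h / 2.0
    x_m = dx_px * altitude_m / focal_px
    y_m = dy_px * altitude_m / focal_px
    return x_m, y_m
```

For example, at 3 m altitude with a 600-pixel focal length, a target 120 pixels right of centre in a 640x480 frame maps to a 0.6 m offset: `ground_offset(440, 240, 640, 480, 3.0, 600.0)` returns `(0.6, 0.0)`.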
Maneuvering the drone
After calculating the distances in the x and y directions, the task is to maneuver the drone to the desired location and land safely. This Python file uses the pre-defined libraries of the DJI Onboard SDK to perform these tasks. The distances in the x and y directions calculated above are used here to guide the drone in attitude mode.
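The guidance step can be sketched as a simple proportional controller that turns the estimated x/y offsets into clamped velocity setpoints; the resulting commands would then be sent through the DJI Onboard SDK's attitude-control interface. The gain and limit values here are illustrative assumptions, not the project's tuned parameters.

```python
def velocity_command(x_m, y_m, kp=0.5, v_max=0.5):
    """Map the target's ground offset (metres) to clamped velocity
    setpoints (m/s) for attitude-mode control. Pure P-control sketch;
    kp and v_max are placeholder values.
    """
    def clamp(v):
        return max(-v_max, min(v_max, v))
    return clamp(kp * x_m), clamp(kp * y_m)
```

Clamping keeps the drone from commanding aggressive motion when the target is first detected far from the image centre; as the offset shrinks, the commanded velocity shrinks with it, and landing can be triggered once the offset falls below a threshold.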
The image from the camera mounted on the drone must be fed to the vision algorithm so that it can calculate the distance of the target from the drone. For this purpose a combination of packages is used. To convert the image from the camera into a ROS-compatible format (a ROS image message), the usb_cam package is used:
sudo apt-get install ros-indigo-usb-cam
To use a ROS image with OpenCV, the cv_bridge library has to be included in the manifest file of the rospy package. Here is an example that listens to a ROS image message topic, converts the images into OpenCV images, draws a circle on each, and displays the image using OpenCV. The image is then republished over ROS.
This Python code implements that pipeline: the camera image is converted into a ROS image, which is further converted into an OpenCV image and passed to the vision algorithm.
The above framework was tested at heights of 3 and 5 meters above the ground and worked well, except when wind currents were strong, which caused errors due to the small size of the landing platform (an A4 sheet). The DJI Onboard SDK provides control over the drone in both GPS and non-GPS modes and can be used according to the scenario in which the drone is deployed. Have a look at the dji_sdk_demo package to get a feel for all the available control commands.