Udacity Flying Car Nanodegree - Term 1 - Project 2 - 3D Motion Planning
This is the second project of Udacity's Flying Car Nanodegree. It consists of planning and executing a trajectory for a drone in an urban environment. Building on top of the event-based strategy used in the first project, it explores the complexity of path planning in a 3D environment. The code communicates with the Udacity FCND Simulator using the Udacidrone API.
To run this project, you need to have Miniconda installed. In my case, I removed a previous `~/*conda*` directory from my home and then installed it with `bash Miniconda3-latest-MacOSX-x86_64.sh -b`.

The following are the main code files used on the project:
Here are some examples of trajectories found with this code:
It is interesting to see how the execution time, cost, and waypoint count change with each variation of the algorithm. The following figures show those parameters for the four goals used:
Videos of the drone flying in the simulator:
In addition to that grid-based implementation, the following code uses a graph to search for the path to the goal:
Here are some examples of trajectories found (or not found) with this code:
It is interesting to see how much faster this algorithm is compared to A* on a grid. It is also interesting that, in this case, no path was found to the upper-right goal. Another characteristic is that the waypoint count in this case was higher than the one found with A* on a grid.
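For illustration, a graph version of A* can be written with just the standard library. The adjacency-dict format and function name below are assumptions for this sketch, not the project's actual graph representation:

```python
import heapq

def a_star_graph(graph, heuristic, start, goal):
    """A* over an adjacency dict: graph[node] -> list of (neighbor, edge_cost).

    Returns (path, cost) or (None, None) when no path exists, which is
    exactly what happened for the upper-right goal mentioned above.
    """
    queue = [(heuristic(start, goal), 0.0, start)]
    visited = set()
    came_from = {}
    best_cost = {start: 0.0}
    while queue:
        _, cost, node = heapq.heappop(queue)
        if node == goal:
            # Reconstruct the path by walking predecessor links backwards.
            path = [goal]
            while path[-1] != start:
                path.append(came_from[path[-1]])
            return path[::-1], cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(neighbor, float('inf')):
                best_cost[neighbor] = new_cost
                came_from[neighbor] = node
                heapq.heappush(queue, (new_cost + heuristic(neighbor, goal),
                                       new_cost, neighbor))
    return None, None
```

Because a graph typically has far fewer nodes than a full occupancy grid, the search explores much less, which is consistent with the speed difference observed above.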
Videos of the drone flying in the simulator with these trajectories:
To run the code you need to change to the repo directory and create the conda environment with the following command:
conda env create -f environment.yml
Note: This environment configuration is provided by Udacity at the FCND Term 1 Starter Kit repo.
Activate your environment with the following command:
source activate fcnd
Start the drone simulator. You will see something similar to the following image:
Select the Motion Planning project, and you will get to the following environment:
Now it is time to run the code. For the A* grid implementation:
python motion_planning.py --goal_lon -122.40195876 --goal_lat 37.79673913 --goal_alt -0.147
For the graph implementation:
python graph_motion_planning.py --goal_lon -122.40195876 --goal_lat 37.79673913 --goal_alt -0.147
There are examples with different goal coordinates in the following two .sh scripts:
You're reading it! Below I describe how I addressed each rubric point and where in my code each point is handled.
> `motion_planning.py` is a modified version of `backyard_flyer_solution.py` for simple path planning. Verify that both scripts work. Then, compare them side by side and describe in words how each of the modifications implemented in `motion_planning.py` is functioning.

Both versions are similar in the sense that they implement a finite-state machine to command the drone, but they are not exactly the same. On `backyard_flyer_solution.py`, the states and transitions represented are:

The state machine implemented on `motion_planning.py` adds another state to the previous ones:
There is a new state, `PLANNING`, between `ARMING` and `TAKEOFF`. When the drone is in the `ARMING` state and it is actually armed (line 66) in the `state_callback` method (lines 61 to 72), the transition to `PLANNING` is executed by the method `plan_path`. This method's responsibility is to calculate the waypoints necessary for the drone to arrive at its destination.
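The flow above can be sketched with a minimal stub; the class, the waypoint values, and the reduced state set below are illustrative stand-ins, not the project's real `MotionPlanning` class:

```python
from enum import Enum, auto

class States(Enum):
    MANUAL = auto()
    ARMING = auto()
    PLANNING = auto()  # new state between ARMING and TAKEOFF
    TAKEOFF = auto()

class PlannerStub:
    """Minimal stand-in for the drone class, showing only the new transition."""
    def __init__(self):
        self.flight_state = States.ARMING
        self.armed = True          # the simulator reports this after arming
        self.waypoints = []

    def plan_path(self):
        # In the real project this builds the grid, runs A*, and prunes the
        # path; here we only record the state change and some dummy waypoints.
        self.flight_state = States.PLANNING
        self.waypoints = [(10, 0, 5), (10, 10, 5)]

    def state_callback(self):
        # Once the drone reports it is armed, plan instead of taking off directly.
        if self.flight_state == States.ARMING and self.armed:
            self.plan_path()
```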
On the `plan_path` method:

- The grid is created with `create_grid` from the module `planning_utils.py`.
- The path is found with the `a_star` method from the module `planning_utils.py`.
- The waypoints are sent to the simulator with `send_waypoints` at line 161.

> Read `lat0` and `lon0` from the `colliders.csv` file and set that position as global home (`self.set_home_position()`).

The home position is read at `motion_planning.py` line 124. It uses the function `read_home` added to `planning_utils.py`.
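A possible implementation of `read_home` is sketched below, assuming the first line of `colliders.csv` has the form `lat0 <value>, lon0 <value>`; the regular expression is my own and not necessarily the project's exact code:

```python
import re

def read_home(filename):
    """Parse 'lat0 <value>, lon0 <value>' from the first line of colliders.csv
    and return (latitude, longitude) as floats."""
    with open(filename) as f:
        first_line = f.readline()
    match = re.match(r'lat0 (?P<lat>[-0-9.]+), lon0 (?P<lon>[-0-9.]+)', first_line)
    return float(match.group('lat')), float(match.group('lon'))
```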
> Retrieve your current position in geodetic coordinates from `self._latitude()`, `self._longitude()` and `self._altitude()`. Then use the utility function `global_to_local()` to convert to local position (using `self.global_home()` as well, which you just set).

This coordinate transformation is done at line 130.
The grid start point is calculated from lines 144 to 146.
Three new parameters were added to `motion_planning.py` to accept goal coordinates. The coordinates are converted to local coordinates at lines 151 to 152 to be used in the search algorithm.
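The three parameters can be declared with `argparse` roughly as follows; the defaults shown are just the example goal used in the commands above, not necessarily the script's real defaults:

```python
import argparse

def build_parser():
    """Command-line goal coordinates for the planner (sketch)."""
    parser = argparse.ArgumentParser(description='3D motion planning goal')
    parser.add_argument('--goal_lon', type=float, default=-122.40195876,
                        help='goal longitude (geodetic)')
    parser.add_argument('--goal_lat', type=float, default=37.79673913,
                        help='goal latitude (geodetic)')
    parser.add_argument('--goal_alt', type=float, default=-0.147,
                        help='goal altitude (NED, down positive)')
    return parser
```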
Diagonal movements were implemented by adding them to the `Action` enum. The `valid_actions` method was modified to take those actions into account. Here is an example of the A* trajectories on a grid:
When the diagonal actions are implemented, the trajectories to the same goals changed:
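The diagonal additions to the `Action` enum can be sketched like this, following the starter code's `(delta_north, delta_east, cost)` tuple structure; diagonal moves cost sqrt(2) so A* still prefers the cheapest route:

```python
import math
from enum import Enum

class Action(Enum):
    """Grid actions as (delta_north, delta_east, cost) tuples."""
    WEST = (0, -1, 1)
    EAST = (0, 1, 1)
    NORTH = (-1, 0, 1)
    SOUTH = (1, 0, 1)
    # Diagonal actions added for this project:
    NORTH_WEST = (-1, -1, math.sqrt(2))
    NORTH_EAST = (-1, 1, math.sqrt(2))
    SOUTH_WEST = (1, -1, math.sqrt(2))
    SOUTH_EAST = (1, 1, math.sqrt(2))

    @property
    def delta(self):
        # Grid offset of the move, without the cost component.
        return (self.value[0], self.value[1])

    @property
    def cost(self):
        return self.value[2]
```

`valid_actions` then simply iterates over all eight members and discards any move that leaves the grid or lands on an obstacle.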
The path was pruned at line 162 using collinearity (the `collinearity_prune` function), following the method provided in the lectures. The trajectories after this transformation are:
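A collinearity-based pruning pass in the spirit of the lecture method can be sketched as follows; the function name matches the one mentioned above, but the body is illustrative:

```python
def collinear(p1, p2, p3, epsilon=1e-6):
    """2D collinearity test: the determinant below is twice the area of the
    triangle (p1, p2, p3); near-zero area means the points are on one line."""
    area = (p1[0] * (p2[1] - p3[1])
            + p2[0] * (p3[1] - p1[1])
            + p3[0] * (p1[1] - p2[1]))
    return abs(area) < epsilon

def collinearity_prune(path, epsilon=1e-6):
    """Drop interior waypoints that lie on a straight line between their
    neighbours, shortening the waypoint list without changing the route."""
    pruned = list(path)
    i = 0
    while i < len(pruned) - 2:
        if collinear(pruned[i], pruned[i + 1], pruned[i + 2], epsilon):
            del pruned[i + 1]
        else:
            i += 1
    return pruned
```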
The following are links to videos directing the drone to different locations: