Adityaguptai Self Driving Car

An end-to-end CNN model that predicts the steering wheel angle from video/image input


Self Driving Car (End to End CNN/Dave-2)

Refer to the Self Driving Car notebook for complete information.

  • Used convolutional neural networks (CNNs) to map the raw pixels from a front-facing camera directly to steering commands for a self-driving car. This end-to-end approach means that, with minimal training data from humans, the system learns to steer, with or without lane markings, on both local roads and highways. It can also operate in areas with unclear visual guidance, such as parking lots or unpaved roads.
  • The system is trained to automatically learn the internal representations of the necessary processing steps, such as detecting useful road features, with only the human steering angle as the training signal. It does not need to be explicitly trained to detect, for example, the outline of roads.
  • End-to-end learning leads to better performance and smaller systems. Better performance results because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria are understandably selected for ease of human interpretation, which does not automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with a minimal number of processing steps.
  • Nvidia calls this system DAVE-2.
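As a rough sketch of how the DAVE-2 convolutional stack transforms the input, the layer shapes below follow the sizes given in the Nvidia paper (the `conv_out` helper is illustrative, not part of this repository):

```python
# Sketch of the DAVE-2 convolutional stack from the Nvidia paper.
# conv_out is an illustrative helper for VALID-padding output sizes.

def conv_out(size, kernel, stride):
    """Output length of a VALID convolution along one dimension."""
    return (size - kernel) // stride + 1

# (kernel, stride, output channels) for each conv layer in the paper
layers = [(5, 2, 24), (5, 2, 36), (5, 2, 48), (3, 1, 64), (3, 1, 64)]

h, w = 66, 200  # YUV input plane size used in the paper
for k, s, c in layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    print(f"conv {k}x{k}/{s}: {h}x{w}x{c}")

flat = h * w * 64
print("flattened features:", flat)  # feeds dense layers 1164 -> 100 -> 50 -> 10 -> 1
```

Running this prints the shrinking feature-map sizes down to 1x18x64, i.e. 1,152 flattened features entering the fully connected head.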

Watch a Real Car Running Autonomously Using This Algorithm

A TensorFlow/Keras implementation of this Nvidia paper with some changes.

Conclusions from the paper

  • This demonstrated that CNNs can learn the entire task of lane and road following without manual decomposition into road or lane-marking detection, semantic abstraction, path planning, and control. The system learns, for example, to detect the outline of a road without explicit labels during training.
  • A small amount of training data, from less than a hundred hours of driving, was sufficient to train the car to operate in diverse conditions: on highways and on local and residential roads, in sunny, cloudy, and rainy weather.
  • The CNN is able to learn meaningful road features from a very sparse training signal (steering alone).
  • More work is needed to improve the robustness of the network, to find methods to verify that robustness, and to improve visualization of the network's internal processing steps.
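The sparse training signal mentioned above is just the recorded steering angle, so training reduces to a scalar regression objective. A minimal plain-Python sketch of such a loss (mean squared error is a common choice in reimplementations, though the exact loss used here is an assumption):

```python
# Illustrative mean-squared-error loss over steering angles.
# A plain-Python sketch; a real implementation computes this inside TensorFlow/Keras.

def steering_mse(predicted, recorded):
    """Mean squared error between predicted and recorded steering angles."""
    assert len(predicted) == len(recorded)
    return sum((p - r) ** 2 for p, r in zip(predicted, recorded)) / len(predicted)

# Example: small prediction errors on three frames
loss = steering_mse([0.10, -0.05, 0.00], [0.12, -0.05, 0.01])
```

Minimizing this single scalar is what drives the network to discover road features on its own, with no per-pixel or per-lane labels.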



How to Use

Download the dataset by Sully Chen: [] Size: 25 minutes of driving = 25 min × 60 s/min × 30 fps = 45,000 images ≈ 2.3 GB

Note: if you are short of compute resources, you can skip training and use the pretrained model.

Use python3 to train the model

Use python3 to run the model on a live webcam feed

Use python3 to run the model on the dataset
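Before training, the dataset's label file has to be parsed. A hypothetical loader is sketched below; it assumes each line of the dataset's data.txt pairs an image filename with a steering angle in degrees (e.g. "0.jpg 0.000000") and converts degrees to radians, as several reimplementations do. Verify the layout of your download before relying on this.

```python
import math

# Hypothetical loader for the dataset's label file. Assumes each line pairs an
# image filename with a steering angle in degrees -- check your download's format.

def load_labels(lines):
    """Parse (filename, angle-in-radians) pairs from data.txt-style lines."""
    samples = []
    for line in lines:
        name, angle = line.split()[:2]
        samples.append((name, float(angle) * math.pi / 180.0))  # degrees -> radians
    return samples

samples = load_labels(["0.jpg 0.000000", "1.jpg -12.500000"])
```

The resulting (filename, angle) pairs can then be shuffled and split into training and validation sets before being fed to the model.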

To visualize training with TensorBoard, run tensorboard --logdir=./logs, then open http://localhost:6006 in your web browser.

Other Larger Datasets you can train on

(1) Udacity:
70 minutes of data ~ 223GB
Format: image, latitude, longitude, gear, brake, throttle, steering angle, and speed
(2) Udacity Dataset: [Datasets ranging from 40 to 183 GB, recorded in different conditions]
(3) Dataset [80 GB Uncompressed]
(4) Apollo dataset with road data from different environments:

Some other State of the Art Implementations


Credits & Inspired By

(2) Research paper: End to End Learning for Self-Driving Cars by Nvidia. []
(3) Nvidia blog:

README Source: adityaguptai/Self-Driving-Car-