DEEPWAY V2

Autonomous navigation for blind people. This project is version 2 of DeepWay. You can have a look at this video.

Before proceeding, take a look at the demo of version 2:

DeepWay v.2

A question you may have in mind:

If I already had a repository, why make another?

V1 was based on Keras, and I don't like TensorFlow much, so for more control I have shifted to PyTorch. It is a complete redesign.

How is it better than others:

  1. Cost effective: I made the entire project for less than Rs 10,000, which is less than $200.
  2. Blind people generally develop other senses, like hearing, very well. Taking away one of those senses by using earphones would not be nice, so I provide information to the blind person using haptic feedback.
  3. Everything runs on an edge device: the Nvidia Jetson Nano.

Hardware requirements

  1. Nvidia Jetson Nano.
  2. Arduino Nano.
  3. 2 servo motors.
  4. USB audio adapter (as the Jetson Nano does not have an audio jack).
  5. Ethernet cable.
  6. Web camera.
  7. Power adapter for the Nvidia Jetson Nano.
  8. 3D printer (not necessary).
  9. A laptop (Nvidia GPU preferred) or any cloud service provider.

Software requirements (if running on a laptop)

  1. Ubuntu machine (16.04 preferred).
  2. Install Anaconda.
  3. Install the required dependencies. Some libraries, like PyTorch and OpenCV, need a little extra attention.

conda env create -f deepWay.yml

  1. You can now clone the repository.
  2. Change the COM port number in the arduno.py file according to your system.
  3. Connect the Arduino Nano and the USB audio adapter to your PC.
  4. Change CAM from 0 to a video path to run the system on a video (see the sketch after this list).
  5. Compile and upload the Arduino Nano code to the Arduino Nano.
  6. Run blindrunner.py.
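For reference, here is a minimal sketch of the kind of configuration these steps describe; the variable names and defaults below are assumptions for illustration, not the exact contents of arduno.py or blindrunner.py.

```python
# Hypothetical configuration sketch: serial port for the Arduino Nano and
# the CAM source (webcam index or video file). Names/defaults are assumptions.
import cv2
import serial

COM_PORT = "COM3"   # on Linux this is typically "/dev/ttyUSB0"
CAM = 0             # set to a video path, e.g. "road.mp4", to run on a video

arduino = serial.Serial(COM_PORT, 9600, timeout=1)  # pyserial link to the Nano
cap = cv2.VideoCapture(CAM)                         # webcam (0) or video file

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run lane detection / road segmentation on `frame` here ...
cap.release()
```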

Software requirements (Jetson Nano)

  1. Follow these instructions for starting up with the Jetson Nano.
  2. For connecting headless to the Jetson Nano (using an Ethernet cable):

ifconfig
Check the inet address
nmap -sn inet_address/24 --> will return the live IP addresses
ssh machine_name@ip
Enter the password
Now you can switch on desktop sharing
Now connect to the Jetson using Remmina.

  3. Now install all the required dependencies (it is a time-consuming task, don't lose hope).

1. Collecting the dataset and generating image masks

I made videos of roads and converted those videos to JPGs. This way I collected a dataset of approximately 10,000 images. I collected the images from the left, right and center views, so they are labelled automatically, e.g.:
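A hypothetical sketch of this frame-extraction step, assuming one recording per lane position so the folder name doubles as the label; the frame-skip value and folder layout are illustrative, not the exact pipeline used here.

```python
# Hypothetical frame-extraction sketch: one video per lane position, so every
# extracted frame is automatically labelled by the folder it is saved into.
import os
import cv2

def video_to_frames(video_path, out_dir, every_n=5):
    """Save every n-th frame of `video_path` as a JPG inside `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# One recording per class -> frames land in automatically labelled folders.
for label in ("left", "center", "right"):
    video_to_frames(f"videos/{label}.mp4", f"dataset/{label}")
```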

For U-Net, I had to create binary masks for the input data. I used Labelbox for generating the binary masks (this took a lot of time). A sample is as follows:

For downloading the labelled data from Labelbox, I have made a small utility named "downloader.py".
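The sketch below only illustrates the general idea behind such a downloader, assuming a Labelbox JSON export that stores one mask URL per labelled image; the field names are assumptions, and the actual logic lives in downloader.py.

```python
# Hypothetical sketch of downloading binary masks from a Labelbox export.
# The export format and field names ("Masks", "road") are assumptions;
# the real utility in this repository is downloader.py.
import json
import os
import requests

def download_masks(export_json, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    with open(export_json) as f:
        rows = json.load(f)
    for i, row in enumerate(rows):
        mask_url = row["Masks"]["road"]        # assumed field names
        resp = requests.get(mask_url, timeout=30)
        resp.raise_for_status()
        with open(os.path.join(out_dir, f"{i:06d}.png"), "wb") as out:
            out.write(resp.content)

# Example usage (with the export file downloaded from Labelbox):
# download_masks("export.json", "masks")
```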

2. Model training

I trained a lane detection model which predicts the lane (left, center or right) I am walking in. The loss vs. iteration curve is as follows:
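A minimal PyTorch sketch of a three-class lane-position classifier of this kind; the architecture and hyper-parameters are illustrative assumptions, not the exact model trained here.

```python
# Hypothetical 3-class (left / center / right) lane-position classifier in PyTorch.
import torch
import torch.nn as nn

class LaneNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = LaneNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch (replace with a real DataLoader
# over the left/center/right folders collected above).
images = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```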

I trained a U-Net-based model for road segmentation on Azure. The loss (pink: training, green: validation) vs. iterations curve is as follows.

Though the loss is low, the model does not perform well. I trained a model in Keras with a different architecture that performs really well; its loss vs. iterations curve is:
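Since a low pixel-wise loss can hide poor road segmentation when most pixels are background, one remedy (also suggested in the Results section below) is to weight the road pixels more heavily. A minimal sketch, assuming binary road masks and an arbitrary weight value:

```python
# Hypothetical weighted-loss sketch for road segmentation: up-weight the
# (rarer) road pixels so a low loss can't be reached by predicting
# "background" everywhere. The weight value 3.0 is an assumption.
import torch
import torch.nn as nn

# `logits` would come from the U-Net, `masks` from the Labelbox export.
logits = torch.randn(4, 1, 128, 128)                    # raw U-Net outputs
masks = torch.randint(0, 2, (4, 1, 128, 128)).float()   # binary road masks

pos_weight = torch.tensor([3.0])  # >1 penalises missed road pixels more
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = criterion(logits, masks)
```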

3. 3D modelling and printing

My friend Sangam Kumar Padhi helped me with the CAD model. You can look at it here.

4. Electronics on the spectacles

The electronics on the spectacles are very simple: just two servo motors connected to an Arduino Nano. The Arduino Nano receives signals from the Jetson over serial (using the pyserial library) and drives the servo motors.
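A minimal sketch of the Jetson-side serial link, assuming a one-byte command per direction that the Arduino sketch maps to servo movements; the port name and command encoding are assumptions, not the exact protocol used in this repository.

```python
# Hypothetical Jetson -> Arduino link: one byte per command, which the
# Arduino sketch turns into servo positions. The 'l'/'r' encoding is assumed.
import serial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def nudge(direction):
    """Ask the Arduino to tap the left or right side of the spectacles."""
    if direction == "left":
        arduino.write(b"l")
    elif direction == "right":
        arduino.write(b"r")

nudge("left")   # e.g. the person is drifting right -> tap the left servo
```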

5. Pedestrian detection using MobileNet V1 SSD

I am using Hao's pytorch-ssd repository for pedestrian detection. It runs at approximately 10 FPS (on its own) on the Jetson Nano, and the accuracy is also pretty good.
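A hypothetical usage sketch, assuming the helper functions exposed by Hao's pytorch-ssd repository (create_mobilenetv1_ssd and create_mobilenetv1_ssd_predictor) and its published MobileNet V1 SSD weights; check that repository for the exact API and file names.

```python
# Hypothetical pedestrian detection with Hao's pytorch-ssd. Function names,
# checkpoint and label files are assumptions taken from that repository's
# examples, not code from this repo.
import cv2
from vision.ssd.mobilenetv1_ssd import create_mobilenetv1_ssd, create_mobilenetv1_ssd_predictor

class_names = [name.strip() for name in open("models/voc-model-labels.txt")]
net = create_mobilenetv1_ssd(len(class_names), is_test=True)
net.load("models/mobilenet-v1-ssd-mp-0_675.pth")
predictor = create_mobilenetv1_ssd_predictor(net, candidate_size=200)

image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
boxes, labels, probs = predictor.predict(image, 10, 0.4)  # top_k, prob_threshold
people = [boxes[i] for i in range(boxes.size(0)) if class_names[labels[i]] == "person"]
```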

Results

  1. The lane detection model works really well; it runs at approximately 25 FPS on the Jetson Nano, which I think is really good for a 30 FPS camera.
  2. The road segmentation model does not work as well as the lane detection one. Though the loss decreases a lot, the output is still not as expected. @ptrblck suggests using focal loss or a weighted loss.
  3. I trained another model using a different U-Net architecture in Keras and it performs really well.
  4. I am using a naive approach for path planning right now. Assumption: only people will be on the streets.
  5. For pedestrian detection, I am using MobileNet V1 SSD (thanks to Hao). It runs at 5 FPS. I tried the object detection models in jetson-inference; they run at approximately 15 FPS, but I was not able to capture frames with OpenCV while GStreamer was also capturing frames.
  6. To cope with the slow frame rate of MobileNet, I combined it with object tracking. Object detection runs once every 3 seconds to re-seed the object tracker (see the sketch after this list).
  7. Overall the system runs at 3 FPS. I am running my Nano in 5 W mode with a USB type B power supply of 5 V, 2 A. Running the Jetson in 10 W mode with a 5 V, 4 A supply would further improve performance.
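A minimal sketch of the "detect occasionally, track in between" loop described in point 6, assuming OpenCV's KCF tracker (opencv-contrib-python) and a placeholder detector; the 3-second interval follows the text above, everything else is illustrative.

```python
# Hypothetical detect-then-track loop: run the (slow) SSD detector every few
# seconds and re-seed cheap OpenCV trackers with its boxes in between.
import time
import cv2

DETECT_EVERY = 3.0  # seconds between full detection passes

def detect_people(frame):
    """Placeholder for the MobileNet V1 SSD call described above."""
    return []  # list of (x, y, w, h) boxes

cap = cv2.VideoCapture(0)
trackers = []
last_detection = 0.0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    now = time.time()
    if now - last_detection > DETECT_EVERY:
        # Re-seed the trackers from a fresh detection pass.
        trackers = []
        for box in detect_people(frame):
            t = cv2.TrackerKCF_create()   # requires opencv-contrib-python
            t.init(frame, box)
            trackers.append(t)
        last_detection = now
    else:
        # Cheap per-frame update between detections.
        boxes = [t.update(frame)[1] for t in trackers]
cap.release()
```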

The project is complete from my side, but there is other functionality I would like to add in the future.

TODO

  • Collect training data.
  • Train a lane detection model.
  • Add servo motors feedback support.
  • Add sound support.
  • 3D print the spectacles.
  • Train U-Net for doing a lot of other stuff (like path planning).
  • Improve U-Net accuracy (the loss is very low, but the model does not come up to my expectations).
  • Drawing lanes (depends upon improving U-Net accuracy).
  • Improving lane detection by taking averages of lane positions.
  • Pedestrian detection with tracking for more FPS.
  • Improving speed of pedestrian detection by using tracking instead of detection.
  • Try to run optimized models on the Jetson (available in jetson-inference).
  • Optimizing everything to run even faster on the Jetson Nano (conversion of models to TensorRT).
  • Adding GPS support for better navigation.
  • Adding path planning.
  • Adding face recognition support(I have a face_recognition repository, so most of the work is done, but I think face recognition should be added after we perfect the navigation part.)

People to Thank

  1. Army Institute of Technology (My college).
  2. Prof. Avinash Patil, Sangam Kumar Padhi, Sahil and Priyanshu for 3D modelling and printing.
  3. Shivam Sharma and Arpit for data labelling.
  4. Nvidia for providing a free Jetson kit.
  5. Labelbox, for providing me with a free license of their amazing product.

References

  1. PyTorch
  2. PyImageSearch
  3. PyTorch community (special mention: @ptrblck)
  4. AWS
  5. U-Net
  6. U-Net implementation (usuyama)
  7. U-Net implementation (Heet Sankesara)
  8. Hao's pytorch-ssd
  9. JetsonHacks
  10. TensorFlow
  11. Keras
  12. Advanced lane detection (Eddie Forson)

Citations

Labelbox, "Labelbox," Online, 2019. [Online]. Available: https://labelbox.com

Liked it?

Tell me if you liked it by giving a star. Also check out my other repositories; I always make cool stuff. I even have a YouTube channel, "reactor science", where I post all my work.

Read about v1 at:

  1. Geospatial Magazine
  2. Hackster
  3. Anyline
  4. Arduino blog