Gaze Estimation With Laser Sparking

Deep learning based gaze estimation demo with a fun feature :-)


Gaze Estimation Demo with Sparking Laser Beam ;-)

This program demonstrates how to use the gaze-estimation-adas-0002 model from the OpenVINO Open Model Zoo with the Intel(r) Distribution of OpenVINO(tm) toolkit.
The program finds faces in the input image, detects the landmark points on the detected faces to locate the eyes, estimates the head rotation angle, and finally estimates the gaze orientation.
The program draws the gaze lines like laser beams. It also detects collisions between the laser beams and draws sparks at the crossing points (just for fun). The gaze estimation model requires the head rotation angle and the cropped eye images as its inputs. Therefore, the program uses the head-pose-estimation-adas-0001 model to estimate the head rotation angles and the facial-landmarks-35-adas-0002 model to detect key landmark points on the face (such as the positions of the eyes and nose). The landmark detection model detects 35 points per face.
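Below is a minimal sketch (not this demo's actual code) of the final step, calling gaze-estimation-adas-0002 through the OpenVINO 2021 Python API. The input/output names (left_eye_image, right_eye_image, head_pose_angles, gaze_vector) follow the Open Model Zoo model description; the model path and how the eye crops are obtained are assumptions for illustration.

    # Sketch only: run gaze-estimation-adas-0002 on two 60x60 eye crops plus head pose angles.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    model_xml = 'intel/gaze-estimation-adas-0002/FP16/gaze-estimation-adas-0002.xml'  # assumed path
    net = ie.read_network(model=model_xml, weights=model_xml.replace('.xml', '.bin'))
    exec_net = ie.load_network(network=net, device_name='CPU')

    def estimate_gaze(left_eye_bgr, right_eye_bgr, yaw, pitch, roll):
        # Eye crops are assumed already resized to 60x60 (BGR, HWC); convert them to NCHW blobs.
        def to_blob(img):
            return img.transpose((2, 0, 1))[np.newaxis, ...].astype(np.float32)
        res = exec_net.infer(inputs={
            'left_eye_image':   to_blob(left_eye_bgr),
            'right_eye_image':  to_blob(right_eye_bgr),
            'head_pose_angles': np.array([[yaw, pitch, roll]], dtype=np.float32),  # degrees
        })
        return res['gaze_vector'][0]  # 3-element gaze direction vector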


Gaze Estimation Result

[Image: gaze estimation result showing the laser-beam gaze lines and sparks]

Required DL Models to Run This Demo

The demo expects the following models in the Intermediate Representation (IR) format:

  • face-detection-adas-0001
  • head-pose-estimation-adas-0001
  • facial-landmarks-35-adas-0002
  • gaze-estimation-adas-0002

You can download these models from the OpenVINO Open Model Zoo. The models.lst file lists the appropriate models for this demo, which can be obtained with the Model Downloader. See the OpenVINO documentation for more information about the Model Downloader.
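For reference, the content of such a models.lst is simply the model names, one per line (the file bundled with the repository may differ slightly, so treat this as an assumed example):

    face-detection-adas-0001
    head-pose-estimation-adas-0001
    facial-landmarks-35-adas-0002
    gaze-estimation-adas-0002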

How to Run

0. Prerequisites

  • OpenVINO 2021.3
    • If you haven't installed it, go to the OpenVINO web page and follow the Get Started guide to install it. After installation, set up the environment variables as shown below.
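
Before running the downloader or the demo, initialize the OpenVINO environment in the terminal you will use. The install paths below are the 2021.x defaults and are an assumption; adjust them to your installation.

(Linux) source /opt/intel/openvino_2021/bin/setupvars.sh
(Win10) "C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"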

1. Install dependencies

The demo depends on:

  • numpy
  • scipy
  • opencv-python

To install all the required Python modules you can use:

(Linux) pip3 install -r requirements.in
(Win10) pip install -r requirements.in
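
If you need to recreate requirements.in, its assumed contents are just the three modules listed above:

    numpy
    scipy
    opencv-python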

2. Download DL models from OMZ

Use Model Downloader to download the required models.

(Linux) python3 $INTEL_OPENVINO_DIR/deployment_tools/tools/model_downloader/downloader.py --list models.lst
(Win10) python "%INTEL_OPENVINO_DIR%\deployment_tools\tools\model_downloader\downloader.py" --list models.lst

3. Run the demo app

Attach a USB webcam as the input of the demo program, then run the program. If you want to use a movie file as the input, you can modify the source code to do so, as sketched below.
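For example, the input source is typically opened with cv2.VideoCapture, so switching to a movie file is just a matter of passing a file path instead of a camera index (the variable and file names here are illustrative, not the demo's actual identifiers):

    import cv2
    cap = cv2.VideoCapture(0)              # USB webcam (device 0)
    # cap = cv2.VideoCapture('input.mp4')  # use a movie file as the input instead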

The following keys are valid while the demo is running (a sketch of the key handling follows this list):
'f': Flip image
'l': Laser mode on/off
's': Spark mode on/off
'b': Boundary box on/off
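
A minimal sketch of how these toggles can be handled with OpenCV's waitKey loop (the flag names are illustrative and may differ from the demo's actual variables):

    key = cv2.waitKey(1) & 0xFF
    if key == ord('f'):
        flip_image = not flip_image        # mirror the input frame
    elif key == ord('l'):
        laser_mode = not laser_mode        # toggle laser-beam gaze lines
    elif key == ord('s'):
        spark_mode = not spark_mode        # toggle sparks at beam crossings
    elif key == ord('b'):
        boundary_box = not boundary_box    # toggle face bounding boxes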

(Linux) python3 gaze-estimation.py
(Win10) python gaze-estimation.py

Demo Output

The application draws the results (face bounding boxes, gaze laser beams, and sparks) on the input image.
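
As a rough illustration of how a gaze "laser beam" can be rendered from the model output (the projection and scale factor are assumptions, not the demo's exact code): the x and y components of the 3D gaze vector are used to extend a line from the eye center across the frame.

    # Hypothetical sketch: draw a gaze line from an eye center using the gaze vector (x, y, z).
    import cv2
    def draw_gaze_line(frame, eye_center, gaze_vector, length=1000):
        gx, gy = float(gaze_vector[0]), float(gaze_vector[1])
        end = (int(eye_center[0] + gx * length), int(eye_center[1] - gy * length))  # image y axis points down
        cv2.line(frame, (int(eye_center[0]), int(eye_center[1])), end, (0, 0, 255), 2)  # red "laser beam"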

Tested Environment

  • Windows 10 x64 1909 and Ubuntu 18.04 LTS
  • Intel(r) Distribution of OpenVINO(tm) toolkit 2021.3
  • Python 3.6.5 x64
