A creepy stalking tool that processes motion-triggered images from security cameras, sorts detected objects into categories, and detects license plates and faces. Has a PWA-ready web front end. Meant to make property monitoring faster without the need to watch video recordings.
⚠️ Notice: due to my lack of interest in this project, I have decided to archive it ⚠️
Open Intelligence processes motion-triggered images from any camera and sorts detected objects using Yolo. It provides an easy-to-use front-end web interface with rich features so that you have up-to-date intel on the current status of your property. Open Intelligence uses license plate detection (ALPR) to read vehicle plates and face detection to find people's faces, which can then be sorted into person folders and trained so that Open Intelligence can try to identify the people it sees. All of this can be done from the front-end interface.
Open Intelligence uses a super resolution neural network to upscale images for improved license plate detection.
The project's goal is to be a useful information gathering tool that provides data for easy property monitoring without expensive camera systems, because any existing cameras are suitable.
I developed this for my own use because I was tired of going through recorded video with existing monitoring tools. I wanted to know quickly what had been happening.
Open Intelligence is meant to be run with Docker.
Click below to watch the promo video.
Screenshots (see the repository for the images): Cameras view, Plate calendar, Face wall, Face wall source dialog.
Everything can be installed on one server or split across separate servers, e.g. the database on server one, the Python application on server two, and API hosting on server three. All processes must have access to the output storage containing the processed image result files.
This section is a step-by-step installation tutorial to get started with a Docker based installation. Running Open Intelligence with Docker does not limit you to Docker only: you can, for example, run the api, front-end, app.py and similarity processes with Docker and have a separate GPU enabled machine for the super resolution and insightface processes.
Download PostgreSQL and install it on any machine you want on your network. If you install it on a different machine than the one running the Docker images, search for how to make PostgreSQL available to other devices on the local network.
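Alternatively, PostgreSQL itself can also run in Docker. A minimal sketch of such a service (the service name, credentials, and volume path below are placeholders, not part of this project):

```yaml
services:
  db:
    image: postgres:12
    environment:
      POSTGRES_USER: openintelligence     # placeholder credentials, change these
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: openintelligence
    ports:
      - "5432:5432"                       # expose to other machines on the LAN
    volumes:
      - ./pgdata:/var/lib/postgresql/data # persist data across restarts
```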
If you don't already have Docker, go to https://docs.docker.com/get-docker/ and follow their instructions.
If you don't already have Docker Compose, go to https://docs.docker.com/compose/install/ and follow their instructions.
Use any version control tool to clone this repository. I recommend a git based tool so that you can check out the latest code later; in other words, don't download it as an "offline" zip file.
Using git from a shell:

```shell
git clone https://github.com/norkator/open-intelligence.git
```
Get the models from https://drive.google.com/file/d/1dSJuxpwSFfF7SIJg8NMKG5yCIG9CHQKw/view?usp=sharing and unzip them into the Open Intelligence `/python/models` folder.
Rename `docker-compose.yml_tpl` to `docker-compose.yml` and fill in your environment variables, which are pretty much just the database configuration. Also make sure your timezone is right: https://github.com/norkator/open-intelligence/wiki/Linux-notes#ensure-your-timezone-is-right
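As an illustration only, the database related environment section of a compose service might look like the following; the variable names here are hypothetical, so use the ones actually present in `docker-compose.yml_tpl`:

```yaml
environment:
  DB_HOST: 192.168.1.10      # machine where PostgreSQL runs (hypothetical names)
  DB_PORT: "5432"
  DB_NAME: openintelligence
  DB_USER: openintelligence
  DB_PASSWORD: changeme
```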
On a Linux machine, the steps for configuring storage are: open `docker-compose.yml` and tweak all the volume configs:

```yaml
volumes:
  - ./python/:/app
  - /Users/<user-name>/Desktop/camera_root:/input
  - /Users/<user-name>/Desktop/output:/output_test
```
The first part, like `/Users/<user-name>/Desktop/camera_root`, is the path to a folder on your host machine. The same goes for the output folder path: it's just a folder on your machine where you want all Open Intelligence output files to be stored. The latter part, like `:/input` and `:/output_test`, is how the Python process sees the path inside the container. It does not affect your actual mounted folders in any way, so it's better not to change it.
More on this in the wiki article: https://github.com/norkator/open-intelligence/wiki/Configuring-storage
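The host-side folders need to exist before the containers start. A small sketch of preparing them (the paths below are examples, substitute your own):

```shell
# Example host folders for the input and output mounts (paths are placeholders)
CAMERA_ROOT="$HOME/camera_root"   # motion-triggered camera images land here
OUTPUT_DIR="$HOME/oi_output"      # Open Intelligence writes processed results here
mkdir -p "$CAMERA_ROOT" "$OUTPUT_DIR"
```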
Run `docker-compose up` in the root of this project and let the magic happen. Then open http://localhost:3000/ and you should see the Open Intelligence front page. Hopefully. If you change the configuration, run `docker-compose up` again so that the changes take effect on the Python side.

To update, run at the root of this project:

```shell
git fetch
git pull
docker-compose build
docker-compose up
```
Overall process flow among the different Python processes of Open Intelligence.
Default folders:

```
.
├── api            # Backend service for the front-end webpage
├── docs           # Documents folder containing images and drawings
├── front-end      # Web interface for this project
└── python         # Python backend applications doing the heavy lifting
    ├── classifiers  # Classifiers for different detectors, e.g. faces
    ├── libraries    # Modified third-party libraries
    ├── models       # Yolo and other detector model files
    ├── module       # Python-side application logic source files
    ├── objects      # Base objects for internal logic
    └── scripts      # Scripts to ease things
```
This part explains in more detail what each of the base Python app scripts is meant for. Many tasks are separated into their own process. App.py is always the main process, the first thing that sees images.
- `App.py`
- `StreamGrab.py`
- `SuperResolution.py` — can be tested with `python SuperResolutionTest.py --testfile="some_file.jpg"`, which loads the image with the given name from the `/images` folder.
- `InsightFace.py`
- `SimilarityProcess.py`
This way is no longer recommended; using the Docker method is much better.
Terminology for "API side" and "Python side":

- API side: the `/api` folder, containing the Node API process `intelligence.js`.
- Python side: the `./python` folder, containing the different Python processes.

See Project folder structure for more details about the folders.
1. Go to the `/api` folder and run `npm install`.
2. Rename `.env_tpl` to `.env` and fill in the details.
3. Run `node intelligence-tasks.js`, or with the PM2 process manager: `pm2 start intelligence-tasks.js`.
4. Run `node intelligence.js`, or with the PM2 process manager: `pm2 start intelligence.js -i 2`.
1. Go to the `./front-end` folder and rename `.env_tpl` to `.env`.
2. In `./front-end`, run `npm start` so that you have both the API and the front end running.
3. Open `localhost:3000` if the React app doesn't open a browser window automatically.
4. In `./front-end/.env`, check that `REACT_APP_API_BASE_URL` corresponds to the IP address of the machine where the Node.js API is running.
5. If you want, run `npm run build` and copy the `/build` folder contents somewhere to serve the built webpage.

(Windows)
1. Go to the `/python` folder.
2. Activate the virtual environment: `.\venv\Scripts\activate.bat`
3. Run `pip install -r requirements_windows.txt`
4. Make sure you have Microsoft Visual C++ 2015 Redistributable (x64) installed.
5. See the Python Apps section.

(Linux)
```shell
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get install python3.6
virtualenv --python=/usr/bin/python3.6 ./
source ./bin/activate
pip install -r requirements_linux.txt
```

Then see the Python Apps section.

Multi node support requires a little more work to configure, but it's doable. Follow the instructions below.
1. Create `config_slave.ini` from the template `config_slave.ini.tpl`; it is used by `App.py` and other files.
2. Start `App.py` with the slave-node argument: `App.py --bool_slave_node True`
CUDA only works with some processes, such as super resolution and insightface. Requirements are:
All datetime fields are inserted without timezone so that:
File : 2020-01-03 08:51:43
Database : 2020-01-03 06:51:43.000000
Database timestamps are shifted on use based on local time offset.
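As a sketch of that shifting idea (this is illustrative, not the project's actual code; the UTC+2 offset is assumed to match the example above):

```python
from datetime import datetime, timedelta, timezone

# Naive timestamp as stored in the database (UTC, no timezone info)
db_timestamp = datetime(2020, 1, 3, 6, 51, 43)

# On use, interpret it as UTC and shift by the local offset (assumed UTC+2 here)
local_tz = timezone(timedelta(hours=2))
file_time = db_timestamp.replace(tzinfo=timezone.utc).astimezone(local_tz)

print(file_time.strftime("%Y-%m-%d %H:%M:%S"))  # 2020-01-03 08:51:43
```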
These notes are for Windows. The current Docker setup makes this installation automatic.
I got it running with the following steps, after downloading the 2.3.0 release from https://github.com/openalpr/openalpr/releases:

1. Unzipped `openalpr-2.3.0-win-64bit.zip` to the `/libraries` folder.
2. From `Source code (zip)`, went to `src/bindings/python` and ran `python setup.py install`.
3. Moved the contents of `build/lib` to the project's `libraries/openalpr_64/openalpr` folder.

Now `from libraries.openalpr_64.openalpr import Alpr` works without any Python site-package installation.
There is a separate readme file for the front end; see ./front-end/README.md for more.
Refer to the troubleshooting wiki.
Note that the `/libraries` folder contains Python applications made by other people. I have needed to make small changes to them, which is why they are included here.
See LICENSE file.