The first real-world adversarial attack on the MTCNN face detection system to date
By Edgar Kaziakhmedov, Klim Kireev, Grigorii Melnikov, Mikhail Pautov and Aleksandr Petiushko
This is the code for the research article. The video is available here.
Recent studies have shown that deep learning approaches achieve remarkable results on the face detection task. On the other hand, these advances give rise to a new problem: the security of deep convolutional neural network (DCNN) models, unveiling potential risks for DCNN-based applications. Even minor input changes in the digital domain can result in the network being fooled. It has since been shown that some deep learning-based face detectors are prone to adversarial attacks not only in the digital domain but also in the real world. In the paper, we investigate the security of the well-known cascade CNN face detection system MTCNN and introduce an easily reproducible and robust way to attack it. We propose different face attributes, printed on an ordinary black-and-white printer and attached either to a medical face mask or to the face directly. Our approach is capable of breaking the MTCNN detector in a real-world scenario.
The repository is organized as follows: the attack is implemented in the `adversarial_gen.py` source file. To train the patches, follow the guideline below.
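The original guideline is not reproduced in this README, so the following is only a hypothetical starting point; the actual expected configuration (patch geometry, data paths, iteration count) should be taken from `adversarial_gen.py` itself:

```
# Hypothetical invocation -- no flags are documented in this README;
# consult adversarial_gen.py for the options it actually expects.
python adversarial_gen.py
```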
The rest of the code is well-documented.
NOTE: paste your own TensorFlow implementation of the resize_area_batch function (the INTER_AREA resize algorithm) OR use this one.
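For reference, here is a minimal sketch of what such a function might look like, assuming it takes a batch of images of shape [N, H, W, C] and a target (height, width). It relies on TensorFlow's built-in area resize, which only approximates OpenCV's INTER_AREA, so treat it as a stand-in rather than the implementation the authors used:

```python
import tensorflow as tf

def resize_area_batch(images, target_size):
    """Area-interpolation resize for a batch of images.

    Args:
        images: float32 tensor of shape [N, H, W, C].
        target_size: (height, width) tuple for the output resolution.

    Returns:
        float32 tensor of shape [N, height, width, C].
    """
    # tf.image.ResizeMethod.AREA approximates OpenCV's INTER_AREA
    # (pixel-area averaging when downscaling) but is not an exact match.
    return tf.image.resize(images, target_size,
                           method=tf.image.ResizeMethod.AREA)
```

If you use this code, please cite the paper: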
```
@article{kaziakhmedov2019real,
  title={Real-world attack on MTCNN face detection system},
  author={Kaziakhmedov, Edgar and Kireev, Klim and Melnikov, Grigorii and Pautov, Mikhail and Petiushko, Aleksandr},
  journal={arXiv preprint arXiv:1910.06261},
  year={2019}
}
```
This project is licensed under the MIT License - see the LICENSE file for details.