# [ECCV 2022] TAFIM: Targeted Adversarial Attacks against Facial Image Manipulation
Shivangi Aneja, Lev Markhasin, Matthias Nießner
https://shivangi-aneja.github.io/projects/tafim
Abstract: Face manipulation methods can be misused to affect an individual's privacy or to spread disinformation. To this end, we introduce a novel data-driven approach that produces image-specific perturbations which are embedded in the original images. The key idea is that these protected images prevent face manipulation by causing the manipulation model to produce a predefined manipulation target (a uniformly colored output image in our case) instead of the actual manipulation. In addition, we propose to leverage a differentiable compression approximation, making the generated perturbations robust to common image compression. To defend against multiple manipulation methods simultaneously, we further propose a novel attention-based fusion of manipulation-specific perturbations. Compared to traditional adversarial attacks that optimize noise patterns for each image individually, our generalized model only needs a single forward pass, thus running orders of magnitude faster and allowing for easy integration in image processing stacks, even on resource-constrained devices like smartphones.
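The single-forward-pass idea from the abstract can be pictured with a minimal sketch. This is an illustration only, not the repository's actual API: the `noise_net` generator, the `tanh` bounding, and the ε-budget are assumptions.

```python
import torch

def protect(image: torch.Tensor, noise_net: torch.nn.Module,
            eps: float = 0.05) -> torch.Tensor:
    """Embed an image-specific perturbation with one forward pass.

    No per-image optimization loop is needed: a generator network predicts
    the perturbation directly, which is what makes this approach orders of
    magnitude faster than classic per-image adversarial attacks.
    """
    delta = noise_net(image)                # image-specific perturbation
    delta = eps * torch.tanh(delta)         # bound it to stay imperceptible
    return (image + delta).clamp(0.0, 1.0)  # protected image in [0, 1]
```

Because the clamp only moves pixels back toward the valid range, the protected image never deviates from the original by more than the ε-budget.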
The dependencies for setting up the environment are provided in requirements.txt and can be installed with pip.

Please download the following pretrained models, as they will be required for the experiments.
| Path | Description |
| --- | --- |
| pSp Encoder | pSp model trained with the FFHQ dataset for StyleGAN inversion. |
| StyleClip | StyleClip models trained with the FFHQ dataset for text-based manipulation (Afro, Angry, Beyonce, BobCut, BowlCut, Curly Hair, Mohawk, Purple Hair, Surprised, Taylor Swift, Trump, zuckerberg). |
| SimSwap | SimSwap model trained for face swapping. |
| SAM | SAM model trained for age transformation (used in supp. material). |
| StyleGAN-NADA | StyleGAN-NADA models (used in supp. material). |
The code is well-documented and should be easy to follow.

1. Clone this repo and install the Python dependencies from requirements.txt. The source code is implemented in PyTorch, so familiarity with PyTorch is expected.
2. Test the manipulation methods with the scripts in the `manipulation_tests/` directory. Make sure that these scripts work and that you are able to perform inference with these models.
3. Refer to `configs/paths_config.py` to define the necessary data paths and model paths for training and evaluation.
4. Refer to `configs/transforms_config.py` for the transforms defined for each dataset/experiment.
5. Refer to `configs/common_config.py` and change the `architecture_type` and `dataset_type` according to the experiment you wish to perform.
6. Refer to `configs/data_configs.py` for the source/target data paths for the train and test sets as well as the transforms. To experiment with your own dataset, adjust `data_configs.py` to define your data paths and `transforms_configs.py` to define your own data transforms.
7. Refer to `configs/attack_configs.py` and change the `net_noise` setting to change the protection model architecture.
8. The training scripts are located in `trainer_scripts`. To train a protection model, execute the command that matches the manipulation method:

```shell
# For the self-reconstruction/style-mixing task
python -m trainer_scripts.train_protection_model_pSp

# For the face-swapping task
python -m trainer_scripts.train_protection_model_simswap

# For the textual-editing task
python -m trainer_scripts.train_protection_model_styleclip

# For protection against JPEG compression
python -m trainer_scripts.train_protection_model_pSp_jpeg

# For combining perturbations from multiple manipulation methods
python -m trainer_scripts.train_protection_model_all_attention
```
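The targeted objective these training scripts optimize can be illustrated with a short sketch: the manipulation model applied to the protected image should produce a predefined uniform-color target instead of the actual manipulation, while the perturbation stays small. The function name, the loss weighting, and the `manipulation_model` callable are assumptions for illustration; the repository's actual losses may differ.

```python
import torch
import torch.nn.functional as F

def protection_loss(protected, original, manipulation_model,
                    target_color=(0.0, 0.0, 1.0), w_pert=10.0):
    """Targeted loss: push the manipulated output toward a uniform-color
    target image while keeping the protected image close to the original."""
    manipulated = manipulation_model(protected)
    target = torch.tensor(target_color, device=manipulated.device,
                          dtype=manipulated.dtype)
    target = target.view(1, 3, 1, 1).expand_as(manipulated)
    loss_target = F.mse_loss(manipulated, target)  # drive output to target
    loss_pert = F.mse_loss(protected, original)    # keep protection subtle
    return loss_target + w_pert * loss_pert
```

When the manipulation model already emits the uniform target and the perturbation is zero, both terms vanish, which is the fixed point the training drives toward.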
To evaluate a trained protection model (here for pSp), run:

```shell
python -m testing_scripts.test_protection_model_pSp -p protection_model.pth
```
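For `train_protection_model_all_attention`, the attention-based fusion of manipulation-specific perturbations mentioned in the abstract can be pictured roughly as below. The module layout (a 1×1 convolution predicting per-method attention maps) is an assumption for illustration, not the repository's implementation.

```python
import torch

class PerturbationFusion(torch.nn.Module):
    """Blend manipulation-specific perturbations with learned attention.

    A 1x1 convolution predicts one attention map per manipulation method;
    the maps are softmax-normalized across methods and used as pixel-wise
    blending weights to produce a single combined perturbation.
    """
    def __init__(self, n_methods: int):
        super().__init__()
        self.attn = torch.nn.Conv2d(3 * n_methods, n_methods, kernel_size=1)

    def forward(self, perturbations):                # list of (B, 3, H, W)
        stacked = torch.cat(perturbations, dim=1)    # (B, 3*n, H, W)
        weights = self.attn(stacked).softmax(dim=1)  # (B, n, H, W)
        return sum(w.unsqueeze(1) * p
                   for w, p in zip(weights.unbind(dim=1), perturbations))
```

Because the weights sum to one at every pixel, the fused perturbation stays within the range spanned by the individual perturbations.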
If you find our dataset or paper useful for your research, please include the following citation:
```bibtex
@InProceedings{aneja2022tafim,
  author    = "Aneja, Shivangi and Markhasin, Lev and Nie{\ss}ner, Matthias",
  title     = "TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations",
  booktitle = "Computer Vision -- ECCV 2022",
  year      = "2022",
  publisher = "Springer Nature Switzerland",
  address   = "Cham",
  pages     = "58--75",
  isbn      = "978-3-031-19781-9"
}
```
Contact Us
If you have questions regarding the dataset or code, please email us at [email protected]. We will get back to you as soon as possible.