Official repository accompanying the CVPR 2022 paper EMOCA: Emotion Driven Monocular Face Capture and Animation. EMOCA takes a single image of a face as input and produces a 3D reconstruction. EMOCA sets a new standard for reconstructing highly emotional in-the-wild images.
Radek Daněček · Michael J. Black · Timo Bolkart
This repository is the official implementation of the CVPR 2022 paper EMOCA: Emotion Driven Monocular Face Capture and Animation.
Top row: input images. Middle row: coarse shape reconstruction. Bottom row: reconstruction with detailed displacements.
EMOCA takes a single in-the-wild image as input and reconstructs a 3D face with sufficient facial expression detail to convey the emotional state of the input image. EMOCA advances the state of the art in in-the-wild monocular face reconstruction, with an emphasis on the accurate capture of emotional content. The official project page is here.
EMOCA v2 is now out. Complete the installation steps below, then go to the EMOCA subfolder to test the demos.
Compared to the original model, it produces:

- much better lip and eye closure
- much better lip articulation

You can find the comparison video here.

This is achieved by:

- using a subset of mediapipe landmarks for the mouth, eyes, and eyebrows (as opposed to the FAN landmarks used in EMOCA v1)
- using an absolute landmark loss in combination with the relative losses (as opposed to only relative landmark losses in EMOCA v1)
- incorporating a perceptual lip reading loss inspired by SPECTRE (cited below)
You will have to upgrade to the new environment in order to use EMOCA v2. Please follow the steps below to install the package. Then go to the EMOCA subfolder and follow the steps described there.
While using the new version of this repo is recommended, you can still access the old release here.
The training and testing scripts for EMOCA can be found in the EMOCA subfolder.

To install everything in one go, run the installation script:

```bash
bash install_38.sh
```
If this ran without any errors, you now have a functioning conda environment with all the necessary packages to run the demos. If you had issues with the installation script, go through the long version of the installation and see what went wrong. Certain packages (especially for CUDA, PyTorch and PyTorch3D) may cause issues for some users.
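As a quick sanity check that the trickiest pieces are in place, you can try importing them before moving on:

```bash
conda activate work38_cu11
# Should print the installed PyTorch version and True if CUDA is usable
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```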
The long version, step by step. First, pull the relevant submodules:

```bash
bash pull_submodules.sh
```
Then create a conda environment from conda-environment_py38_cu11_ubuntu.yml. You can use mamba (strongly recommended):

```bash
mamba env create python=3.8 --file conda-environment_py38_cu11_ubuntu.yml
```

but you can also use plain conda if you want (it will just be slower):

```bash
conda env create python=3.8 --file conda-environment_py38_cu11_ubuntu.yml
```
In case the specified PyTorch version somehow did not install, try again manually:

```bash
mamba install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
```
Note: If you find the environment is missing a package, just conda/mamba-install or pip-install it, and please notify me.
Next, activate the environment:

```bash
conda activate work38_cu11
```

For some reason Cython may still be missing; if so, install it:

```bash
pip install Cython==0.29.14
```

Then install the gdl package locally using pip install. I recommend using the -e option and I have not tested otherwise:

```bash
pip install -e .
```
For some people the compilation fails during the requirements install and works afterwards. If that happens, try running the following separately:

```bash
pip install git+https://github.com/facebookresearch/[email protected]
```
PyTorch3D installation (which is part of the requirements file) can unfortunately be tricky and machine specific. EMOCA was developed with PyTorch3D 0.6.2, and the previous command installs it from source (to ensure its compatibility with PyTorch and CUDA). If it fails to compile, you can try to find another way to install PyTorch3D.
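Once it compiles, a quick import check (just a sanity check, not an official step) should confirm the build:

```bash
# Prints the installed PyTorch3D version if the compilation succeeded
python -c "import pytorch3d; print(pytorch3d.__version__)"
```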
Note: The install script installs opencv-python~=4.5.1.48 via pip. If you run into OpenCV-related errors, try upgrading it with

```bash
pip install -U opencv-python
```

or installing it through other means.
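To see which OpenCV build actually ended up in the environment, a one-liner like this works:

```bash
# Prints the OpenCV version visible to Python
python -c "import cv2; print(cv2.__version__)"
```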
Usage: activate the environment before running anything:

```bash
conda activate work38_cu11
```
For running EMOCA examples, go to EMOCA
For running examples of Emotion Recognition, go to EmotionRecognition
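For instance, an image-reconstruction demo run looks roughly like the sketch below; the script name, paths, and model name are illustrative assumptions, so check the EMOCA subfolder README for the exact command:

```bash
cd gdl_apps/EMOCA
# Hypothetical invocation: reconstruct faces from a folder of images
python demos/test_emoca_on_images.py --input_folder <your_images> --output_folder <output_dir> --model_name EMOCA
```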
This repo has two subpackages: gdl and gdl_apps.

gdl is a library full of research code. Some things are organized well, some badly. It includes, but is not limited to, the following:

- models: a module with (larger) deep learning modules (PyTorch based)
- layers: individual deep learning layers
- datasets: base classes and their implementations for various datasets I had to use at some point; mostly image-based datasets with various forms of ground truth, if any
- utils: various tools

The repo is heavily based on PyTorch and PyTorch Lightning.
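A quick smoke test of that layout (the module names are taken from the list above, assuming they all import cleanly):

```bash
# Import each gdl submodule described above to verify the package installed correctly
python -c "import gdl.models, gdl.layers, gdl.datasets, gdl.utils"
```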
gdl_apps contains prototypes that use the GDL library. These can include scripts on how to train, evaluate, test, and analyze models from gdl and/or data for various tasks.
Look for individual READMEs in each sub-project.
Current projects: EMOCA and EmotionRecognition.
If you use this work in your publication, please cite the following publications:
```bibtex
@inproceedings{EMOCA:CVPR:2021,
  title = {{EMOCA}: {E}motion Driven Monocular Face Capture and Animation},
  author = {Danecek, Radek and Black, Michael J. and Bolkart, Timo},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages = {20311--20322},
  year = {2022}
}
```
As EMOCA builds on top of DECA and uses parts of DECA as a fixed part of the model, please further cite:
```bibtex
@article{DECA:Siggraph2021,
  title = {Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},
  author = {Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},
  journal = {ACM Transactions on Graphics (ToG), Proc. SIGGRAPH},
  volume = {40},
  number = {8},
  year = {2021},
  url = {https://doi.org/10.1145/3450626.3459936}
}
```
Furthermore, if you use EMOCA v2, please also cite SPECTRE:
```bibtex
@article{filntisis2022visual,
  title = {Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos},
  author = {Filntisis, Panagiotis P. and Retsinas, George and Paraperas-Papantoniou, Foivos and Katsamanis, Athanasios and Roussos, Anastasios and Maragos, Petros},
  journal = {arXiv preprint arXiv:2207.11094},
  publisher = {arXiv},
  year = {2022}
}
```
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms of this license.
There are many people who deserve credit. They include, but are not limited to: Yao Feng and Haiwen Feng for their original implementation of DECA, and Antoine Toisoul and colleagues for EmoNet.