
Cross-Modal Perceptionist

Code Repository for CVPR 2022 "Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices?"

Cho-Ying Wu, Chin-Cheng Hsu, Ulrich Neumann, University of Southern California

[Paper] [Project page] [Voxceleb-3D Data]

Check the project page for an introduction to this cool work!

Update (2022/12/01): Added evaluation code, a pretrained model, and an execution script for the supervised framework. Organized the data structure of Voxceleb-3D.

Voxceleb-3D:

(1) [Here] contains data with names starting with F-Z as the training set (100G zipped, ~250G after unzipping). This set contains point clouds (.xyz), reconstructed meshes overlapped on images from VGGFace (_b.jpg), and 199-dim 3DMM parameters using the BFM 2009 basis, in contrast to the simplified 3DMM basis with 40-dim shape and 10-dim expression. You can download the full basis from the official BFM-2009 website. There are multiple 3D faces per identity. (A sketch of turning these parameters back into a mesh follows this list.)

(2) [Here] contains data with names starting with A-E as the validation set (300M). The format is the same, except there is only one 3D face per identity as ground truth.

(3) [Here] contains the images from VGGFace that we used to reconstruct the 3D faces in (1) and (2).

(4) [Here] contains preprocessed voice data (MFCC features) from Voxceleb for all identities (38G zipped). Refer to this [meta file] to map IDs to names.

(5) [Here] contains preprocessed voice data (MFCC features) from Voxceleb for the testing subset (A-E). You can download it for inference purposes; see the later sections.
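The 199-dim parameters in (1) and (2) reconstruct a face as a linear combination over the BFM shape basis. Below is a minimal numpy sketch of that step; the file names and array shapes are illustrative assumptions, not the repo's actual loaders (see train.configs for the real ones).

    import numpy as np

    # Hedged sketch: 3DMM reconstruction as mean shape + shape basis @ parameters.
    # File names and shapes are illustrative assumptions, not the repo's API.
    u = np.load('u_shp.npy')         # mean shape, (3N,): N vertices flattened as x, y, z
    w_shp = np.load('w_shp.npy')     # shape basis, (3N, 199)
    alpha = np.load('params.npy')    # 199-dim shape parameters for one identity

    vertices = (u + w_shp @ alpha).reshape(-1, 3)   # (N, 3) point cloud
    np.savetxt('recon.xyz', vertices)               # same layout as the .xyz files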

We study cross-modal learning and analyze the correlation between voices and 3D face geometry. Unlike previous methods that study the correlation between voices and faces only in the 2D domain, we choose a 3D representation, which can better validate the supportive physiological evidence: voices correlate with skeletal and articulator structures, and these potentially affect facial geometry.

Comparison of recovered 3D face meshes with the baseline.

Consistency for the same identity using different utterances.

Demo: Preprocessed fbank

We tested on Ubuntu 16.04 LTS with an NVIDIA 2080 Ti (only GPU execution is supported) and use Anaconda to install packages.

Install packages

  1. conda create --name CMP python=3.8

  2. Install a PyTorch build compatible with your machine; we tested on PyTorch v1.9 (it should be compatible with other 1.x versions)

  3. Install the other dependencies: opencv-python, scipy, PIL, Cython, pyaudio

    Or use the environment.yml we provide instead:

    • conda env create -f environment.yml
    • conda activate CMP
  4. Build the rendering toolkit (C++ and Cython) used to overlap 3D meshes on images:

    cd Sim3DR
    bash build_sim3dr.sh
    cd ..
    

Download pretrained models and 3DMM configuration data

  1. Download from [here] (~160M) and unzip under the root folder. This will create 'pretrained_models' (trained by unsupervised CMP) and 'train.configs' (3DMM config data) under the root folder.

Read the preprocessed fbank for inference

  1. python demo.py (This will fetch the preprocessed MFCC features and use them as network inputs)
  2. Results will be generated under data/results/ (pre-generated references are under data/results_reference)
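The demo consumes features that are already preprocessed. If you want to prepare similar inputs from your own recordings, here is a rough librosa sketch; the 64 mel bands, 16 kHz sample rate, and per-utterance normalization are assumptions for illustration, so check the repo's preprocessing code for the exact recipe.

    import numpy as np
    import librosa

    # Hedged sketch of voice preprocessing: 64-band log-mel (fbank) features.
    # Sample rate, band count, and normalization are assumptions, not the
    # repo's verified settings.
    y, sr = librosa.load('utterance.wav', sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    fbank = np.log(mel + 1e-6)                                # log compression
    fbank = (fbank - fbank.mean()) / (fbank.std() + 1e-6)     # normalization
    np.save('utterance_fbank.npy', fbank.astype(np.float32))  # (64, T)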

More preprocessed MFCC and 3D mesh (3DMM params) pairs can be downloaded: [Voxceleb-3D Data] (about 100G).

Demo: :laughing: Try it! Use device mic input

  1. Do steps 1-5 above. In addition, download the face-type meshes and extract them under ./face_types

  2. python demo_mic.py (The demo records 5 seconds of audio from your device and predicts the face mesh; a sketch of the recording step follows.)
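A minimal pyaudio sketch of that recording step, assuming 16 kHz, 16-bit mono capture; demo_mic.py may use different parameters.

    import wave
    import pyaudio

    # Hedged sketch: capture 5 seconds of 16 kHz, 16-bit mono audio.
    # The parameters are illustrative assumptions, not demo_mic.py's settings.
    RATE, CHUNK, SECONDS = 16000, 1024, 5
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    frames = [stream.read(CHUNK) for _ in range(RATE * SECONDS // CHUNK)]
    stream.stop_stream(); stream.close(); pa.terminate()

    with wave.open('recording.wav', 'wb') as wf:  # save for inspection
        wf.setnchannels(1)
        wf.setsampwidth(2)                        # 16-bit samples
        wf.setframerate(RATE)
        wf.writeframes(b''.join(frames))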

We perform unsupervised gender classification based on the mean male and female shapes and compute statistics between the predicted face and each mean shape. We also compute the distance between the prediction and the four face types (Regular, Slim, Skinny, Wide) and indicate which type the voice is closer to; a sketch of this nearest-mean comparison follows below.

  1. Results will be generated under data/results
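A numpy sketch of that nearest-mean comparison; the file names and the mean per-vertex L2 metric are assumptions, and the statistics the demo actually reports may differ.

    import numpy as np

    # Hedged sketch: compare a predicted mesh against the face-type mean shapes
    # by mean per-vertex L2 distance and report the nearest type. File names
    # and the exact metric are illustrative assumptions.
    types = ['regular', 'slim', 'skinny', 'wide']
    pred = np.loadtxt('data/results/pred.xyz')    # (N, 3) predicted vertices
    dists = {t: np.linalg.norm(pred - np.loadtxt(f'face_types/{t}.xyz'), axis=1).mean()
             for t in types}
    print('closest type:', min(dists, key=dists.get), dists)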

Inference from supervised framework

  1. Do steps 1-5 in the Demo section. Download the pretrained supervised model [here]. Download the voice data (A-E) for inference [here], the [meta file], and the [groundtruth]. Put the pretrained model under './pretrained_models/supervised_64'. Put the voice data and meta file under './data'. Put the groundtruth under './data' and extract it.

  2. Edit config.py line 6: change the path to 'pretrained_models/supervised_64'

  3. python eval_sup.py
    

This will match identities between voice IDs and the available 3D faces reconstructed from VGGFace via the meta file, predict 3D faces only for the matched IDs, and then save all mesh .obj files under './data/supervised_output/'.
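For reference, a Wavefront .obj file is just 'v' lines for vertices and 1-indexed 'f' lines for triangle faces; a minimal sketch (not the repo's serializer) follows.

    import numpy as np

    # Hedged sketch of .obj export; not the repo's actual serializer.
    def write_obj(path, vertices, faces):
        with open(path, 'w') as f:
            for x, y, z in vertices:                 # one 'v' line per vertex
                f.write(f'v {x:.6f} {y:.6f} {z:.6f}\n')
            for i, j, k in faces + 1:                # .obj faces are 1-indexed
                f.write(f'f {i} {j} {k}\n')

    # Example with a single triangle:
    write_obj('tri.obj', np.eye(3), np.array([[0, 1, 2]]))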

Evaluation

  1. Do steps 1-5 in the Demo section. Download the generated and saved meshes for the validation set (names starting with A-E in Voxceleb-3D). From supervised CMP: https://drive.google.com/file/d/1_xobyRM-abjfrvzjbF7uwMVPFPfeKZC9/view?usp=share_link

Voxceleb-3D validation set (the same as the groundtruth in the supervised inference): https://drive.google.com/file/d/1NdkqlCPhl-mvPU9TYlPgHE_FaNJjAysf/view?usp=share_link. Put them under './data' and extract.

The validation set for each identity contains an image (.jpg), a mesh (.obj), a point cloud (.xyz), the image overlapped with the mesh (_b.jpg), and 3DMM parameters (.npy; 199-dim shape and 29-dim expression, in contrast to the simplified 3DMM basis with 40-dim shape and 10-dim expression). You can download the full basis from the official BFM-2009 website; otherwise, use the already reconstructed meshes we provide in .obj format.

  1. bash cal_size.sh
    

This will run the evaluation and report the ARE and keypoint error metrics.
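For orientation, here is a sketch of the general form of these metrics, assuming ARE denotes the absolute relative error of a scalar facial measurement and the keypoint error is a mean L2 distance over corresponding landmarks; consult cal_size.sh and the paper for the exact definitions.

    import numpy as np

    # Hedged sketch of the evaluation metrics' general form; the exact
    # definitions used by cal_size.sh may differ.
    def are(pred_measure, gt_measure):
        """Absolute relative error of a scalar facial measurement."""
        return abs(pred_measure - gt_measure) / gt_measure

    def keypoint_error(pred_kpts, gt_kpts):
        """Mean L2 distance over corresponding (K, 3) landmarks."""
        return np.linalg.norm(pred_kpts - gt_kpts, axis=1).mean()

    print(are(14.2, 14.0), keypoint_error(np.ones((68, 3)), np.zeros((68, 3))))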

Training

  1. Train the unsupervised framework

-- Download the 'Voxceleb-3D' data (2), (3), and (4): the validation set, training images, and training voice banks. Extract and put them under './data'

-- Download a much smaller set [here] for fast online validation

-- python gan_train_cascade.py

Citation

If you find our work useful, please consider citing us.

@inproceedings{wu2022cross,
title={Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices?},
author={Wu, Cho-Ying and Hsu, Chin-Cheng and Neumann, Ulrich},
booktitle={CVPR},
year={2022}
}

This project is developed on top of [SynergyNet], [3DDFA-V2], and [reconstruction-faces-from-voice].
