Code for [ECCV 2022] "AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture"
Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu
To address the ill-posed problem caused by partial observations in monocular human volumetric capture, we present AvatarCap, a framework that introduces animatable avatars into the capture pipeline for high-fidelity reconstruction in both visible and invisible regions.
Using this repo, you can either create an animatable avatar from several 3D scans of one character, or reconstruct that character from a monocular video using the avatar as a prior.
Download the SMPL model files and put them into `./smpl_files`. Download the pretrained models and put them into `./pretrained_models`. The contents of this folder are listed below:

```
./pretrained_models
├── avatar_net
│   ├── example              # the avatar network of the character in the example dataset
│   └── example_finetune_tex # the same avatar network fine-tuned for higher-quality texture
├── recon_net                # reconstruction network, general to arbitrary subjects
└── normal_net               # normal estimation network used in data preprocessing
```
Download the example dataset and extract it to `EXAMPLE_DATA_DIR`.

To train the avatar network, set `training_data_dir` in `configs/example.yaml` as `EXAMPLE_DATA_DIR/training`, then run:

```
python main.py -c ./configs/example.yaml -m train
```

The results will be saved in `./results/example/training`.
To test, set `testing_data_dir` in `configs/example.yaml` as `EXAMPLE_DATA_DIR/testing`, then run:

```
python main.py -c ./configs/example.yaml -m test
```

The results will be saved in `./results/example/testing`.
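The two config edits above (pointing the training and testing data directories at the example dataset) might look like the following sketch. Only the two key names come from the text; the surrounding layout of `configs/example.yaml` is an assumption and may differ in the actual repo.

```yaml
# configs/example.yaml (fragment, hedged sketch -- other keys omitted)
# Replace EXAMPLE_DATA_DIR with the absolute path you extracted the dataset to.
training_data_dir: EXAMPLE_DATA_DIR/training
testing_data_dir: EXAMPLE_DATA_DIR/testing
```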
Check DATA.md for instructions on processing your own data.
Some code is based on PIFuHD, pix2pixHD, SCANimate, POP, and Animatable NeRF. We thank the authors for their great work!
MIT License. SMPL-related files are subject to the license of SMPL.
If you find our code, data, or paper useful for your research, please consider citing:
```bibtex
@InProceedings{li2022avatarcap,
  title     = {AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture},
  author    = {Li, Zhe and Zheng, Zerong and Zhang, Hongwen and Ji, Chaonan and Liu, Yebin},
  booktitle = {European Conference on Computer Vision (ECCV)},
  month     = {October},
  year      = {2022},
}
```