
OHTA: One-shot Hand Avatar via Data-driven Implicit Priors

PICO, ByteDance
:star_struck: Accepted to CVPR 2024

OHTA is a novel approach capable of creating implicit animatable hand avatars using just a single image. It facilitates 1) text-to-avatar conversion, 2) hand texture and geometry editing, and 3) interpolation and sampling within the latent space.


YouTube

:mega: Updates

[02/2024] :partying_face: OHTA is accepted to CVPR 2024! Working on code release!

:love_you_gesture: Citation

If you find our work useful for your research, please consider citing the paper:

```bibtex
@inproceedings{zheng2024ohta,
  title={OHTA: One-shot Hand Avatar via Data-driven Implicit Priors},
  author={Zheng, Xiaozheng and Wen, Chao and Su, Zhuo and Xu, Zeran and Li, Zhaohu and Zhao, Yang and Xue, Zhou},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```

:newspaper_roll: License

Distributed under the MIT License. See LICENSE for more information.
