
[ICML-2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation



Shunlin Lu🍅 2, 3, Ling-Hao Chen🍅 1, 2, Ailing Zeng2, Jing Lin1, 2, Ruimao Zhang3, Lei Zhang2, and Heung-Yeung Shum1, 2

🍅Co-first author. Listing order is random.

1Tsinghua University, 2International Digital Economy Academy (IDEA), 3School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-SZ)

🤩 Abstract

This work targets a novel text-driven whole-body motion generation task, which takes a given textual description as input and aims at generating high-quality, diverse, and coherent facial expressions, hand gestures, and body motions simultaneously. Previous works on text-driven motion generation tasks mainly have two limitations: they ignore the key role of fine-grained hand and face controlling in vivid whole-body motion generation, and lack a good alignment between text and motion. To address such limitations, we propose a Text-aligned whOle-body Motion generATiOn framework, named HumanTOMATO, which is the first attempt to our knowledge towards applicable holistic motion generation in this research area. To tackle this challenging task, our solution includes two key designs: (1) a Holistic Hierarchical VQ-VAE (aka H²VQ) and a Hierarchical-GPT for fine-grained body and hand motion reconstruction and generation with two structured codebooks; and (2) a pre-trained text-motion-alignment model to help generated motion align with the input textual description explicitly. Comprehensive experiments verify that our model has significant advantages in both the quality of generated motions and their alignment with text.

Code will be released step by step over the following months!

📢 News

  • [2023/11/15] Publish HumanTOMATO Motion Representation (tomato representation) processing code.
  • [2023/10/22] Publish project!

🎬 Highlight Whole-body Motions

The proposed HumanTOMATO model can generate text-aligned whole-body motions with vivid and harmonious face, hand, and body movement. Two qualitative generated results are shown below.

🔍 System Overview

The framework overview of the proposed text-driven whole-body motion generation. (a) Holistic Hierarchical Vector Quantization (H²VQ) to compress fine-grained body-hand motion into two discrete codebooks with hierarchical structure relations. (b) Hierarchical-GPT using motion-aware textual embedding as the input to hierarchically generate body-hand motions. (c) Facial text-conditional VAE (cVAE) to generate the corresponding facial motions. The outputs of body, hand, and face motions comprise a vivid and text-aligned whole-body motion.
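To make the hierarchical quantization in (a) concrete, below is a minimal, self-contained sketch of two-stage vector quantization with a body codebook and a hand codebook, where the hand stage is conditioned on the quantized body codes. This is an illustrative toy (random codebooks, simple additive-residual conditioning), not the released H²VQ implementation; all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(features, codebook):
    """Nearest-neighbour lookup: map each feature vector (T, D) to the
    closest codebook entry (K, D); return indices and quantized vectors."""
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

# Toy setup: T frames of D-dim features, two codebooks of K entries each.
T, D, K = 8, 16, 32
body_codebook = rng.normal(size=(K, D))
hand_codebook = rng.normal(size=(K, D))

body_feat = rng.normal(size=(T, D))
hand_feat = rng.normal(size=(T, D))

# Stage 1: quantize the body motion against the body codebook.
body_idx, body_q = quantize(body_feat, body_codebook)

# Stage 2: quantize the hand motion conditioned on the quantized body.
# Here the conditioning is a simple additive residual -- an assumption
# for illustration, not the paper's exact coupling.
hand_idx, hand_q = quantize(hand_feat - body_q, hand_codebook)

# Decoding reverses the hierarchy: body codes first, then the hand
# reconstruction refined by the body reconstruction.
hand_recon = hand_q + body_q
```

The key structural point the sketch conveys is that the two codebooks are not independent: the hand codes are defined relative to the body codes, which is what lets the downstream Hierarchical-GPT generate body tokens first and hand tokens second.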

🚀 Quick Start

🚅 Model Training

📸 Visualization

🤝🏼 Citation

If you find our code useful in your research, please cite us:

@article{humantomato,
  title={HumanTOMATO: Text-aligned Whole-body Motion Generation},
  author={Lu, Shunlin and Chen, Ling-Hao and Zeng, Ailing and Lin, Jing and Zhang, Ruimao and Zhang, Lei and Shum, Heung-Yeung},
  journal={arXiv:2310.12978},
  year={2023}
}

📚 License

This code is distributed under an IDEA LICENSE. Note that our code depends on other libraries and datasets which each have their own respective licenses that must also be followed.

💋 Acknowledgement

Our code builds on TMR, MLD, T2M-GPT, and HumanML3D. Thanks to all contributors!

🌟 Star History


If you have any questions, please contact: shunlinlu0803 [AT] gmail [DOT] com and thu [DOT] lhchen [AT] gmail [DOT] com.

README Source: IDEA-Research/HumanTOMATO