InterDiff

[ICCV 2023] Official PyTorch implementation of the paper "InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion"


InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion

Sirui Xu, Zhengyuan Li, Yu-Xiong Wang*, Liang-Yan Gui*
University of Illinois Urbana-Champaign
ICCV 2023

🏠 About

This paper addresses a novel task of anticipating 3D human-object interactions (HOIs). Most existing research on HOI synthesis lacks comprehensive whole-body interactions with dynamic objects, e.g., it is often limited to manipulating small or static objects. Our task is significantly more challenging, as it requires modeling dynamic objects with various shapes, capturing whole-body motion, and ensuring physically valid interactions. To this end, we propose InterDiff, a framework comprising two key steps: (i) interaction diffusion, where we leverage a diffusion model to encode the distribution of future human-object interactions; (ii) interaction correction, where we introduce a physics-informed predictor to correct denoised HOIs in a diffusion step. Our key insight is to inject the prior knowledge that the interactions, expressed in a reference frame relative to the contact points, follow a simple pattern and are easily predictable. Experiments on multiple human-object interaction datasets demonstrate the effectiveness of our method for this task, capable of producing realistic, vivid, and remarkably long-term 3D HOI predictions.
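The two-step design can be sketched as a toy reverse-diffusion loop (a minimal sketch in plain Python; the function names, the scalar state, and the clamping stand-in for the physics-informed corrector are our assumptions, not the repository's API):

```python
# Toy sketch of the two-step sampling loop (illustrative names only):
# (i) a denoiser proposes a clean HOI estimate at each diffusion step;
# (ii) a physics-informed corrector adjusts that estimate before the next step.

def correct(x0_hat):
    # Stand-in "physics" correction: clamp the estimate to a plausible range.
    # The paper instead predicts physically valid object motion from contacts.
    return max(-1.0, min(1.0, x0_hat))

def sample(denoiser, x_T, T=10):
    x_t = x_T
    for t in reversed(range(T)):
        x0_hat = denoiser(x_t, t)          # (i) interaction diffusion
        x0_hat = correct(x0_hat)           # (ii) interaction correction
        alpha = (t + 1) / (T + 1)          # toy blend in place of the DDPM posterior
        x_t = alpha * x_t + (1 - alpha) * x0_hat
    return x_t

result = sample(lambda x, t: 0.9 * x, x_T=5.0)
```

The point of the sketch is the placement of the correction: it runs inside every diffusion step, on the denoised estimate, rather than once as a post-process.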

📖 Implementation

To create the environment, install the packages listed in the requirement file requirements.txt, which is based on Python 3.7.

> [!NOTE]
> For specific packages such as psbody-mesh and human-body-prior, you may need to build from their sources.
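As a rough sketch, the Python 3.7 route might look like the following (the environment name and the exact source-build commands are our assumptions, not instructions from the repository):

```shell
# Create a Python 3.7 environment and install the pinned dependencies.
conda create -n interdiff python=3.7 -y
conda activate interdiff
pip install -r requirements.txt

# Packages such as psbody-mesh and human-body-prior may need source builds, e.g.:
git clone https://github.com/MPI-IS/mesh.git && cd mesh && make all && cd ..
pip install git+https://github.com/nghorbani/human_body_prior
```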

You may also build from a more detailed requirement file based on Python 3.8, which may contain redundancies:

```shell
conda env create -f environment.yml
```

For more information about the implementation, see interdiff/README.md.

📹 Demo

🔥 News

  • [2023-10-27] Release training and evaluation codes, as well as our checkpoints. Let's play with it!
  • [2023-09-16] Release a demo video 📹.
  • [2023-09-01] Our paper is available on the Arxiv 🎉 Code/Models are coming soon. Please stay tuned! ☕️

📝 TODO List

  • Release more demos.
  • Data preparation.
  • Release training and evaluation (short-term) codes.
  • Release checkpoints.
  • Release evaluation (long-term) and optimization codes.
  • Release code for visualization.

🔍 Overview

💡 Key Insight

We present HOI sequences (left), object motions (middle), and objects relative to the contacts after coordinate transformations (right). Our key insight is to inject coordinate transformations into a diffusion model, as the relative motion shows simpler patterns that are easier to predict, e.g., being almost stationary (top), or rotating around a fixed axis (bottom).
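The coordinate transformation behind this insight can be sketched as follows (a minimal NumPy sketch under our own naming; the repository's actual implementation may differ): express object points in a local frame anchored at the contact point, predict the now-simple motion there, then map back to world coordinates.

```python
import numpy as np

def to_contact_frame(points, contact_pos, contact_rot):
    """Map world-frame points (N, 3) into a local frame whose origin is the
    contact point and whose axes are the columns of contact_rot (3, 3)."""
    return (points - contact_pos) @ contact_rot

def from_contact_frame(local_points, contact_pos, contact_rot):
    """Inverse map: contact-local points back to world coordinates."""
    return local_points @ contact_rot.T + contact_pos

# Example: a rotation about the z-axis as the contact-frame orientation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
p = np.array([0.5, -0.2, 1.0])
verts = np.random.default_rng(0).normal(size=(4, 3))
local = to_contact_frame(verts, p, R)
restored = from_contact_frame(local, p, R)
```

In such a contact-relative frame, an object held firmly is nearly stationary and a pivoting object rotates about a fixed axis, which is the simple, predictable pattern the paper exploits.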

🔗 Citation

If you find our work helpful, please cite:

```bibtex
@inproceedings{xu2023interdiff,
  title={{InterDiff}: Generating 3D Human-Object Interactions with Physics-Informed Diffusion},
  author={Xu, Sirui and Li, Zhengyuan and Wang, Yu-Xiong and Gui, Liang-Yan},
  booktitle={ICCV},
  year={2023}
}
```

👏 Acknowledgements

  • BEHAVE: We use the BEHAVE dataset for the mesh-based interaction.
  • HO-GCN: We use its presented dataset for the skeleton-based interaction.
  • TEMOS: We adopt the rendering code for HOI visualization.
  • MDM: We use MDM in our work.
  • STARS: We use STARS in our work.

📚 License

This code is distributed under the MIT license.

Note that our code depends on other libraries, including SMPL, SMPL-X, PyTorch3D, Hugging Face, Hydra, and uses datasets which each have their own respective licenses that must also be followed.

🌟 Star History

Star History Chart
