
EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis

The official repository of the paper EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis

Paper | Project Page | Code

Given an identity source, EDTalk synthesizes talking face videos whose mouth shapes, head poses, and expressions are consistent with the mouth ground truth (GT), pose source, and expression source. These facial dynamics can also be inferred directly from driving audio. Importantly, EDTalk demonstrates superior efficiency in disentanglement training compared to other methods.
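To make the disentanglement idea above concrete, here is a minimal conceptual sketch (not the official EDTalk code, which has not been released yet): it assumes each motion factor (mouth, pose, expression) is encoded as a small latent produced by its own bank of basis vectors, and that the three factor latents are composed into a single motion latent before decoding. All module names, dimensions, and the composition-by-summation are assumptions made for illustration only.

```python
# Conceptual sketch of disentangled motion factors, NOT the official EDTalk implementation.
# Assumed for illustration: per-factor basis banks, weight vectors, and additive composition.
import torch
import torch.nn as nn

class FactorBank(nn.Module):
    """A learnable bank of basis vectors for one motion factor (mouth / pose / expression)."""
    def __init__(self, num_bases: int, latent_dim: int):
        super().__init__()
        self.bases = nn.Parameter(torch.randn(num_bases, latent_dim))

    def forward(self, weights: torch.Tensor) -> torch.Tensor:
        # weights: (batch, num_bases), e.g. predicted by a factor-specific encoder.
        # Returns a motion latent as a weighted sum of the bank's bases.
        return weights @ self.bases

latent_dim = 512
mouth_bank = FactorBank(num_bases=20, latent_dim=latent_dim)
pose_bank  = FactorBank(num_bases=6,  latent_dim=latent_dim)
exp_bank   = FactorBank(num_bases=10, latent_dim=latent_dim)

batch = 1
# In a full system these weights would come from the mouth GT, pose source, and
# expression source (or be predicted from driving audio); random here.
w_mouth = torch.softmax(torch.randn(batch, 20), dim=-1)
w_pose  = torch.softmax(torch.randn(batch, 6),  dim=-1)
w_exp   = torch.softmax(torch.randn(batch, 10), dim=-1)

# Compose the disentangled factors into one motion latent for a face generator.
motion_latent = mouth_bank(w_mouth) + pose_bank(w_pose) + exp_bank(w_exp)
print(motion_latent.shape)  # torch.Size([1, 512])
```

Because each factor is driven by its own weight vector, any one of the three sources (or audio-predicted weights) can be swapped independently without touching the others, which is the behavior described in the paragraph above.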

TODO

  • Release Arxiv paper.
  • Release code. (Once the paper is accepted)
  • Release Pre-trained Model. (Once the paper is accepted)

Citation

@article{tan2024edtalk,
  title={EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis},
  author={Tan, Shuai and Ji, Bin and Bi, Mengxiao and Pan, Ye},
  journal={arXiv preprint arXiv:2404.01647},
  year={2024}
}

Acknowledgement

Some figures in the paper are inspired by:

The README.md template is borrowed from SyncTalk

Thanks to these great projects.
