Codes for ID-Specific Video Customized Diffusion
Project Page: Magic-Me
Unlike common text-to-video models (such as OpenAI's Sora), this model generates personalized videos from photos of your friends, family, or pets. By training an embedding on these images, it creates custom videos featuring your loved ones, bringing a unique touch to your memories.
News Update: We have deployed our model on Hugging Face's GPU platform, making it available for immediate use. Check it out.
Ze Ma*, Daquan Zhou* †, Chun-Hsiao Yeh, Xue-She Wang, Xiuyu Li, Huanrui Yang, Zhen Dong †, Kurt Keutzer, Jiashi Feng (*Joint First Author, † Corresponding Author)
We propose a new framework for video generation with a customized identity. Given a pre-trained ID token, users can generate video clips featuring the specified identity. We propose a series of controllable video generation and editing methods. The first release includes Video Customized Diffusion (VCD), which comprises three novel components essential for high-quality ID preservation: 1) an ID module trained on identities cropped by prompt-to-segmentation, which disentangles the ID information from background noise for more accurate ID token learning; 2) a text-to-video (T2V) VCD module with a 3D Gaussian noise prior for better inter-frame consistency; and 3) video-to-video (V2V) Face VCD and Tiled VCD modules that deblur the face and upscale the video to higher resolution.
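The 3D Gaussian noise prior can be illustrated as a mix of one noise tensor shared across all frames and independent per-frame noise. This is a hedged sketch of the general idea only, not the paper's exact formulation; `noise_prior` is a hypothetical helper, not part of the codebase.

```python
# Illustrative sketch of a 3D Gaussian noise prior: initial latents for all
# frames share a common noise component, which correlates them across time.
# This is an assumption about the general form, not Magic-Me's exact code.
import numpy as np

def noise_prior(frames, shape, alpha=0.5, rng=None):
    """Mix shared and per-frame Gaussian noise; alpha controls correlation.

    Each frame latent stays unit-variance; inter-frame correlation is alpha**2.
    """
    rng = rng or np.random.default_rng(0)
    shared = rng.standard_normal(shape)                 # common to all frames
    per_frame = rng.standard_normal((frames, *shape))   # independent per frame
    return alpha * shared + np.sqrt(1.0 - alpha**2) * per_frame
```

A higher `alpha` yields smoother, more temporally consistent initial noise at the cost of less per-frame diversity.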
Video Customization Diffusion Model Pipeline
Video Demonstration
ID Specific Video Generation with reference images
| Altman | Lecun | Robert | Taylor |
| --- | --- | --- | --- |
ID Specific Video Editing with reference images
| Original Video | Altman | Bengio | Zuck |
| --- | --- | --- | --- |
More works will be released soon. Stay tuned.
Magic-Me
Magic ID-editing
Magic-Me Instant
Magic-Me Crowd
First, make sure that Anaconda is installed (refer to official Install Tutorial).
git clone https://github.com/Zhen-Dong/Magic-Me.git
cd Magic-Me
conda env create -f environment.yaml
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
source download_checkpoint.sh
conda activate magic
source train.sh
To train a customized ID token embedding:
1. Put the subject's images in the dataset directory.
2. Create a new config .yaml out of configs/ceo.yaml.
3. Run python train.py --config configs/your_config.yaml.
4. Samples are saved during training in the directory outputs/magic-me-ceo-xxxxx/samples.

Running on Google Colab requires a Colab subscription. Users must comply with Google Colab's terms of service and respect copyright laws, ensuring that the resources provided are used responsibly and ethically for academic, educational, or research purposes only.
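The per-subject setup can be sketched in a few lines of Python. The paths `configs/ceo.yaml`, `dataset/`, and `train.py` come from the steps above; the helper `prepare_training` and the subject name are hypothetical, for illustration only.

```python
# Sketch: create a per-subject dataset folder, copy the template config, and
# build the training command. Paths follow the repo layout described above;
# the helper itself is illustrative and not part of the Magic-Me codebase.
import shutil
from pathlib import Path

def prepare_training(subject: str, template: str = "configs/ceo.yaml"):
    Path("dataset", subject).mkdir(parents=True, exist_ok=True)  # photos go here
    config = Path("configs") / f"{subject}.yaml"
    shutil.copy(template, config)      # start from the provided template config
    # Edit the new .yaml to point at dataset/<subject> before training.
    return ["python", "train.py", "--config", str(config)]
```

The returned list can be passed to `subprocess.run` once the new config has been edited to reference your images.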
In ComfyUI, click "Queue Prompt" to generate the video.
Feel free to change the prompt inside ComfyUI (embedding:firstname man for male characters, embedding:firstname woman for female). We have provided 24 different character embeddings for use:
altman.pt beyonce.pt harry.pt huang.pt johnson.pt lisa.pt musk.pt taylor.pt andrew_ng.pt biden.pt hermione.pt ironman.pt lecun.pt mona.pt obama.pt trump.pt bengio.pt eli.pt hinton.pt jack_chen.pt lifeifei.pt monroe.pt scarlett.pt zuck.pt
The available embeddings are cloned into the directory magic_factory/Magic-ComfyUI/models/embeddings.
Feel free to put your newly trained embeddings, for example boy1.pt, in the same directory, and reference the embedding as embedding:boy1 man in the ComfyUI prompt.
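For scripted workflows, the embedding-prompt convention above can be captured in a small helper. `build_prompt` is hypothetical and not part of the repo; it only strings together the documented `embedding:firstname man/woman` form.

```python
# Sketch: turn an embedding file name into the "embedding:firstname man/woman"
# prompt text that ComfyUI expects. build_prompt is an illustrative helper,
# not part of the Magic-Me codebase.
from pathlib import Path

def build_prompt(embedding_file: str, gender_word: str, scene: str) -> str:
    name = Path(embedding_file).stem            # "boy1.pt" -> "boy1"
    return f"embedding:{name} {gender_word}, {scene}"

print(build_prompt("altman.pt", "man", "walking on the beach"))
# -> embedding:altman man, walking on the beach
```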
@misc{ma2024magicme,
title={Magic-Me: Identity-Specific Video Customized Diffusion},
author={Ze Ma and Daquan Zhou and Chun-Hsiao Yeh and Xue-She Wang and Xiuyu Li and Huanrui Yang and Zhen Dong and Kurt Keutzer and Jiashi Feng},
year={2024},
eprint={2402.09368},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
This project is released for academic use. We disclaim responsibility for user-generated content. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards.
Codebase built upon Tune-a-Video and AnimateDiff.