Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
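The latent-space "walk" in the headline is commonly implemented with spherical interpolation (slerp) between two latent noise vectors, so intermediate latents stay at a plausible norm for the diffusion model. A minimal numpy sketch of the standard slerp formula (illustrative only, not the library's exact code):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    """Spherical interpolation between two latent vectors.

    Interpolating along the hypersphere (rather than linearly) avoids
    passing through low-norm latents the model was never trained on.
    Illustrative sketch, not the library's implementation.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Vectors nearly parallel: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Demo: interpolate halfway between two orthogonal unit vectors.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
mid = slerp(0.5, e1, e2)
```

Sampling `t` over `[0, 1]` and decoding each interpolated latent yields the morphing frames.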
🔥 This PR introduced the ability to make music videos that interpolate to the beat of the song!
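Interpolating "to the beat" can be sketched as: derive per-frame interpolation weights from the audio's energy envelope, so transitions jump on loud onsets and crawl during quiet stretches. A numpy sketch under that assumption (the function name and the precomputed envelope are illustrative; the library's actual audio analysis may differ):

```python
import numpy as np

def audio_weighted_timesteps(energy: np.ndarray, num_frames: int) -> np.ndarray:
    """Map an audio energy envelope to interpolation weights in [0, 1].

    Frames advance faster where energy (e.g. percussive onset strength)
    is high, so transitions land on the beat. Illustrative sketch, not
    the library's exact implementation.
    """
    # Resample the envelope to one value per output frame.
    positions = np.linspace(0, len(energy) - 1, num_frames)
    per_frame = np.interp(positions, np.arange(len(energy)), energy)
    # Normalized cumulative energy gives each frame's interpolation
    # weight: flat sections crawl, loud sections jump.
    weights = np.cumsum(per_frame)
    weights -= weights[0]
    weights /= weights[-1]
    return weights

# Demo: a quiet stretch followed by a loud beat.
env = np.array([0.1, 0.1, 0.1, 1.0, 1.0, 0.1])
w = audio_weighted_timesteps(env, num_frames=6)
```

Feeding these weights into the latent interpolation makes the video's motion track the music.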
Full Changelog: https://github.com/nateraw/stable-diffusion-videos/compare/v0.4.0...v0.5.0
Full Changelog: https://github.com/nateraw/stable-diffusion-videos/compare/v0.3.0...v0.4.0
stable_diffusion_videos==0.3.0
🚀 You can now resume unfinished runs instead of starting over. Just pass `resume=True` and we'll resume the run at `<output_dir>/<name>`.
```python
from stable_diffusion_videos import walk

video_path = walk(
    output_dir='dreams',
    name='my_unfinished_run',
    resume=True,  # All you need to do!
)
```
Thank you to @codefaux for adding this feature.
We've been generating frames one at a time, but it's much faster to generate them in batches. Increase the `batch_size` kwarg until you run out of memory, then reduce the value by 1. On a V100 16GB, `batch_size=4` works and generates images ~20% faster.
```python
from stable_diffusion_videos import walk

walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 123],
    batch_size=4,  # All you need to do!
    num_steps=60,
    make_video=True,
)
```
Added `frame_filename_ext` to control saving/resuming from `.png` or `.jpg` by @nateraw in https://github.com/nateraw/stable-diffusion-videos/pull/47
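Resuming relies on knowing which frames already exist on disk, so the extension used when scanning matters. A stdlib-only sketch of that idea (the helper name and directory layout are assumptions, not the library's code):

```python
import tempfile
from pathlib import Path

def count_existing_frames(output_dir: str, name: str, ext: str = ".png") -> int:
    """Count already-rendered frames so a resumed run can skip them.

    Illustrative only: the real library's file layout may differ.
    """
    run_dir = Path(output_dir) / name
    if not run_dir.exists():
        return 0
    return sum(1 for p in run_dir.iterdir() if p.suffix == ext)

# Demo: three .jpg frames in a temporary run directory.
tmp = tempfile.mkdtemp()
run = Path(tmp) / "my_unfinished_run"
run.mkdir()
for i in range(3):
    (run / f"frame{i:06d}.jpg").touch()

jpg_count = count_existing_frames(tmp, "my_unfinished_run", ".jpg")
png_count = count_existing_frames(tmp, "my_unfinished_run", ".png")
```

If the scan extension doesn't match what was written, the run would restart from frame zero, which is why the extension is now configurable.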
Full Changelog: https://github.com/nateraw/stable-diffusion-videos/compare/v0.2.0...v0.3.0
You can now do 4x upsampling (thanks to Real-ESRGAN) to make your results even more awesome!
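"4x upsampling" means each output frame has 4x the height and width of the generated frame. Real-ESRGAN does this with a learned network that also restores detail; as a point of comparison, the size change alone looks like this naive nearest-neighbor numpy sketch (illustrative, not the library's code):

```python
import numpy as np

def upsample_nearest_4x(img: np.ndarray) -> np.ndarray:
    """Naive 4x nearest-neighbor upsample: (H, W, C) -> (4H, 4W, C).

    Each pixel is repeated into a 4x4 block. Real-ESRGAN instead uses a
    learned super-resolution model; this sketch only shows the scaling.
    """
    return np.repeat(np.repeat(img, 4, axis=0), 4, axis=1)

# Demo: a 2x2 single-channel image becomes 8x8.
small = np.array([[[1], [2]],
                  [[3], [4]]], dtype=np.uint8)
big = upsample_nearest_4x(small)
```

So a 512x512 generated frame becomes a 2048x2048 video frame after upsampling.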
Full Changelog: https://github.com/nateraw/stable-diffusion-videos/compare/v0.1.2...v0.2.0
Full Changelog: https://github.com/nateraw/stable-diffusion-videos/compare/v0.1.1...v0.1.2
Try release again
Add PyPI package