Local image generation using VQGAN-CLIP or CLIP guided diffusion
The Python environment defined in `environment.yml` has been updated to enable generating scrolling/zooming images. Although only opencv needed to be added, conda's package resolution also required updating PyTorch to find a compatible opencv version.
Existing users therefore need to do one of the following:
```shell
# Option 1: remove the current Python env and recreate it
conda env remove -n vqgan-clip-app
conda env create -f environment.yml
```

```shell
# Option 2: update the Python environment in place
conda activate vqgan-clip-app
conda env update -f environment.yml --prune
```
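After either option, it can be worth confirming that the updated environment actually resolved the new packages. The snippet below is a small sanity check, not part of the repo; run it inside the activated `vqgan-clip-app` environment.

```python
def installed_version(pkg):
    """Return the package's version string ("unknown" if it doesn't expose
    one), or None if the package isn't importable at all."""
    try:
        mod = __import__(pkg)
        return getattr(mod, "__version__", "unknown")
    except ImportError:
        return None

# cv2 (opencv) and torch (PyTorch) are the two packages touched by the update
for pkg in ("cv2", "torch"):
    ver = installed_version(pkg)
    print(f"{pkg}: {ver if ver else 'MISSING'}")
```

If either line prints `MISSING`, the environment update did not complete cleanly and recreating the env from scratch (Option 1) is the safer path.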
I started this repo as a personal project in Jul 2021, and I believe the codebase is now sufficiently feature-complete and stable to call this a 1.0 release.
See the `docs/` folder for documentation.
Update to 1.0 by running `git pull` from your local copy of this repo. No breaking changes are expected; run results from older versions of the codebase should still show up in the gallery viewer.
However, some new packages are needed to support CLIP guided diffusion. Instead of setting up the Python environment from scratch, you can follow the steps below:
```shell
git clone https://github.com/crowsonkb/guided-diffusion
pip install ./guided-diffusion
pip install lpips
sh download-diffusion-weights.sh
```
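To confirm the two pip installs above landed correctly, you can probe for the new modules without importing them. This is an illustrative check, not part of the repo; the module names `guided_diffusion` and `lpips` are assumptions inferred from the install commands above.

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be found on the
    current Python path (without actually importing them)."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Module names assumed from the pip install steps above
gaps = missing_modules(["guided_diffusion", "lpips"])
print("All set" if not gaps else f"Still missing: {', '.join(gaps)}")
```

Note that the diffusion weights themselves are only checked by attempting a run; this snippet only verifies the Python packages.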
VQGAN-CLIP app and basic gallery viewer implemented.