Blur Diffusion

Official PyTorch implementation of the paper Progressive Deblurring of Diffusion Models for Coarse-to-Fine Image Synthesis.


blur-diffusion

This is the codebase for our paper Progressive Deblurring of Diffusion Models for Coarse-to-Fine Image Synthesis.

Teaser image

Progressive Deblurring of Diffusion Models for Coarse-to-Fine Image Synthesis
Sangyun Lee1, Hyungjin Chung2, Jaehyeon Kim3, Jong Chul Ye2

1Soongsil University, 2KAIST, 3Kakao Enterprise

Paper: https://arxiv.org/abs/2207.11192

Abstract: Recently, diffusion models have shown remarkable results in image synthesis by gradually removing noise and amplifying signals. Although this simple generative process works surprisingly well, is it the best way to generate image data? For instance, even though human perception is more sensitive to the low frequencies of an image, diffusion models do not assign any relative importance to the different frequency components. Therefore, to incorporate an inductive bias suited to image data, we propose a novel generative process that synthesizes images in a coarse-to-fine manner. First, we generalize standard diffusion models by enabling diffusion in a rotated coordinate system with a different velocity for each component of the vector. We further propose blur diffusion as a special case, where each frequency component of an image is diffused at a different speed. Specifically, the proposed blur diffusion consists of a forward process that blurs an image and adds noise gradually, and a corresponding reverse process that deblurs the image and removes noise progressively. Experiments show that the proposed model outperforms the previous method in FID on the LSUN bedroom and church datasets.
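To make the forward process concrete, here is a minimal toy sketch of one draw from a blur-diffusion-style forward process: the image's frequency components (DCT coefficients here, purely as an illustrative choice of rotated coordinate system) decay at different speeds, with high frequencies decaying faster, while Gaussian noise is added. The function name, the linear frequency-rate schedule, and the noise scale are all illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np
from scipy.fft import dctn, idctn

def blur_forward_step(x0, t, T, sigma=0.1, rng=None):
    """Sample a toy forward blur-diffusion state at time t in [0, T].

    Each DCT coefficient of x0 is attenuated at its own speed
    (higher frequencies decay faster), so the image is progressively
    blurred; isotropic Gaussian noise grows with t. This is an
    illustrative schedule, not the paper's exact one.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = x0.shape
    # Per-frequency decay rates: larger (fy + fx) -> faster decay.
    fy = np.arange(h)[:, None] / h
    fx = np.arange(w)[None, :] / w
    rate = fy + fx                        # 0 for the DC component
    decay = np.exp(-5.0 * rate * t / T)   # DC survives, highs vanish
    coeffs = dctn(x0, norm="ortho") * decay
    x_blur = idctn(coeffs, norm="ortho")
    noise_level = sigma * t / T
    return x_blur + noise_level * rng.standard_normal((h, w))
```

At t = 0 this returns the clean image unchanged; at t = T the high-frequency coefficients are almost entirely suppressed while the DC component is preserved, matching the coarse-to-fine intuition: the reverse process would then recover fine details last-removed first.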

Train

bash train.sh

Visualization

bash eval_x0hat.sh

Dataset

Image files are required to compute FID during training.

Open Source Agenda is not affiliated with "Blur Diffusion" Project. README Source: sangyun884/blur-diffusion
