Distilling Diffusion Models into Conditional GANs
Authors: Minguk Kang, Richard Zhang, Connelly Barnes, Sylvain Paris, Suha Kwak, Jaesik Park, Eli Shechtman, Jun-Yan Zhu, Taesung Park
What
This paper introduces Diffusion2GAN, a method that distills a complex multi-step diffusion model into a single-step conditional GAN by framing distillation as a paired image-to-image translation task, dramatically accelerating inference while preserving image quality.
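The paired-translation framing is concrete: the frozen teacher deterministically maps each noise sample (plus prompt) to an output latent along its ODE trajectory, and these (noise, prompt, latent) triples become supervised targets for the one-step student. Below is a minimal sketch of that data-collection step; `teacher.ode_sample` is a hypothetical wrapper for any deterministic solver (e.g., DDIM) over a frozen text-to-image diffusion model, not an API from the paper.

```python
import torch

@torch.no_grad()
def collect_pairs(teacher, prompts, num_steps=50, latent_shape=(4, 64, 64)):
    """Gather (noise, prompt, target-latent) triples for distillation.

    `teacher.ode_sample` is assumed to run a deterministic ODE solver
    (e.g., DDIM) from pure noise to a clean latent, so the same noise
    always maps to the same target.
    """
    pairs = []
    for prompt in prompts:
        z = torch.randn(1, *latent_shape)                    # student's input
        x_latent = teacher.ode_sample(z, prompt, num_steps)  # multi-step target
        pairs.append((z, prompt, x_latent))
    return pairs
```

Because the ODE mapping is deterministic, each noise seed has a single well-defined target, which is what lets the student be trained with plain paired regression, as in pix2pix-style translation.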
Why
This paper is important because it addresses the slow inference speed of diffusion models, a major limitation hindering their real-time application in areas like text-to-image synthesis, 3D modeling, and video generation. By enabling one-step generation without significant quality loss, it paves the way for more practical and interactive applications of these powerful models.
How
The authors formulate diffusion distillation as a paired image-to-image translation problem, using noise-to-image pairs collected from the teacher diffusion model’s ODE trajectory. For the regression term, they introduce E-LatentLPIPS, an efficient perceptual loss that operates directly in the diffusion model’s latent space with an ensemble of augmentations. They also adapt the diffusion model into a multi-scale conditional discriminator trained with a text alignment loss.
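The "E" in E-LatentLPIPS refers to the ensemble of random augmentations applied before measuring the perceptual distance. A minimal sketch follows, assuming a hypothetical `latent_lpips` module (an LPIPS-style feature metric trained on 4-channel latents); the augmentation pool below is illustrative, not the paper's exact set. One augmentation is drawn per step and applied identically to prediction and target, so the ensemble accumulates over training iterations.

```python
import random
import torch

def _flip(x):       # horizontal flip
    return torch.flip(x, dims=[-1])

def _translate(x):  # small random shift, circular padding for simplicity
    dh, dw = random.randint(-8, 8), random.randint(-8, 8)
    return torch.roll(x, shifts=(dh, dw), dims=(-2, -1))

def _cutout(x):     # zero out a random square patch
    h, w = x.shape[-2:]
    s = h // 4
    top, left = random.randint(0, h - s), random.randint(0, w - s)
    x = x.clone()
    x[..., top:top + s, left:left + s] = 0
    return x

AUGMENTATIONS = [_flip, _translate, _cutout]

def e_latent_lpips(latent_lpips, pred_latent, target_latent):
    """Ensembled latent-space perceptual loss (sketch).

    Prediction and target are concatenated so the randomly chosen
    augmentation transforms both identically before the distance is taken.
    """
    aug = random.choice(AUGMENTATIONS)
    both = aug(torch.cat([pred_latent, target_latent], dim=0))
    pred_aug, target_aug = both.chunk(2, dim=0)
    return latent_lpips(pred_aug, target_aug).mean()
```

Working on latents also sidesteps the VAE decode that pixel-space LPIPS would require at every iteration, which is where the efficiency gain comes from.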
Result
Diffusion2GAN outperforms state-of-the-art one-step diffusion distillation models (DMD, SDXL-Turbo, SDXL-Lightning) on the zero-shot COCO benchmark. Because it skips decoding latents to pixels, E-LatentLPIPS is far cheaper to compute than standard pixel-space LPIPS, enabling larger batch sizes. The method’s generality is demonstrated by distilling both Stable Diffusion 1.5 and the larger SDXL model, with strong FID and CLIP scores in both cases.
Limitations & Future Work
The paper acknowledges two main limitations: the distilled generator is tied to the classifier-free guidance (CFG) scale used during distillation and cannot vary it at inference time, and its quality is bounded by the teacher model. Future work could explore guided-distillation techniques for CFG flexibility and leverage real image-text pairs to surpass the teacher. Further investigation is also needed into the diversity drop observed when scaling up to larger models.
Abstract
We propose a method to distill a complex multi-step diffusion model into a single-step conditional GAN student model, dramatically accelerating inference while preserving image quality. Our approach interprets diffusion distillation as a paired image-to-image translation task, using noise-to-image pairs of the diffusion model’s ODE trajectory. For efficient regression loss computation, we propose E-LatentLPIPS, a perceptual loss operating directly in the diffusion model’s latent space, utilizing an ensemble of augmentations. Furthermore, we adapt a diffusion model to construct a multi-scale discriminator with a text alignment loss to build an effective conditional GAN-based formulation. E-LatentLPIPS converges more efficiently than many existing distillation methods, even accounting for dataset construction costs. We demonstrate that our one-step generator outperforms cutting-edge one-step diffusion distillation models (DMD, SDXL-Turbo, and SDXL-Lightning) on the zero-shot COCO benchmark.
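Combining the pieces above, one generator update pairs the E-LatentLPIPS regression term with a conditional adversarial term. The sketch below assumes the hypothetical `student` and `discriminator` modules plus the `e_latent_lpips` function from the earlier snippets; `lambda_gan` is an illustrative weight, not the paper's setting, and the real discriminator is built from the teacher's weights and operates at multiple scales.

```python
import torch.nn.functional as F

def generator_step(student, discriminator, perceptual_loss, z, text_emb,
                   target_latent, lambda_gan=0.5):
    """One generator update for the one-step student (sketch).

    `perceptual_loss` is a two-argument callable, e.g. the ensembled
    E-LatentLPIPS above with its feature metric already bound via
    functools.partial. The regression term pulls the single-step output
    toward the teacher's ODE endpoint; the adversarial term comes from a
    text-conditioned discriminator.
    """
    fake_latent = student(z, text_emb)             # single forward pass
    loss_rec = perceptual_loss(fake_latent, target_latent)
    logits = discriminator(fake_latent, text_emb)  # conditional critic
    loss_gan = F.softplus(-logits).mean()          # non-saturating GAN loss
    return loss_rec + lambda_gan * loss_gan
```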