FreeU: Free Lunch in Diffusion U-Net

Authors: Chenyang Si, Ziqi Huang, Yuming Jiang, Ziwei Liu

What

This paper introduces FreeU, a method for improving the sample quality of diffusion models during inference by re-weighting the contributions of skip connections and backbone features in the U-Net architecture.
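
The core operation can be sketched in a few lines of PyTorch: amplify part of the backbone feature map and damp the low-frequency band of the skip feature before the decoder concatenates them. The sketch below follows the spirit of the paper's description rather than the authors' exact implementation; the channel split, function names, and default factor values are illustrative assumptions.

```python
import torch
import torch.fft as fft


def fourier_filter(x, threshold, scale):
    """Scale the centred low-frequency band of a feature map in the Fourier domain."""
    x_freq = fft.fftshift(fft.fftn(x.float(), dim=(-2, -1)), dim=(-2, -1))
    B, C, H, W = x_freq.shape
    mask = torch.ones((B, C, H, W), device=x.device)
    crow, ccol = H // 2, W // 2
    # Multiply only the low-frequency centre region by `scale` (s < 1 attenuates it).
    mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
    x_freq = x_freq * mask
    x_freq = fft.ifftshift(x_freq, dim=(-2, -1))
    return fft.ifftn(x_freq, dim=(-2, -1)).real.to(x.dtype)


def free_u(backbone_feat, skip_feat, b=1.5, s=0.9):
    """Re-weight backbone and skip features before the decoder concatenates them.

    b > 1 amplifies part of the backbone feature map (stronger denoising / semantics);
    s < 1 suppresses the low frequencies of the skip feature, so that mainly its
    high-frequency detail reaches the decoder. Values here are placeholders.
    """
    backbone_feat = backbone_feat.clone()
    half = backbone_feat.shape[1] // 2          # assumed split: scale first half of channels
    backbone_feat[:, :half] = backbone_feat[:, :half] * b
    skip_feat = fourier_filter(skip_feat, threshold=1, scale=s)
    return backbone_feat, skip_feat
```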

Why

The paper addresses an under-explored aspect of diffusion model research, the denoising U-Net itself, and shows that re-balancing its internal features improves generation quality without additional training and with negligible extra inference cost.

How

The authors conducted experiments with a range of diffusion models, including Stable Diffusion, DreamBooth, ModelScope, and Rerender, applying FreeU only at inference time. They analyzed the impact of the backbone and skip-connection scaling factors on the generated images and videos, comparing the results against the corresponding baseline models.
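
As a concrete way to apply FreeU at inference, recent versions of Hugging Face diffusers expose an `enable_freeu` helper on Stable Diffusion pipelines. The sketch below assumes such a version is installed; the model identifier and the factor values are illustrative, and the recommended settings for a given model should be taken from the project page.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed model checkpoint; any Stable Diffusion pipeline with FreeU support works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Turn FreeU on: b1/b2 amplify backbone features, s1/s2 attenuate the low
# frequencies of the skip features (example values; tune per model).
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("freeu_sample.png")

# FreeU can be switched off again without reloading the model.
pipe.disable_freeu()
```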

Result

The key finding is that FreeU significantly improves the quality of generated images and videos across various tasks, including text-to-image synthesis, text-to-video generation, image editing, and video-to-video translation. Notably, FreeU achieves these enhancements without requiring any additional training or fine-tuning of the models, making it a practical solution for enhancing diffusion model output.

Limitations & Future Work

The paper does not explicitly discuss its limitations. Potential future work could explore the optimal balancing of backbone and skip-connection features for specific tasks, as well as the application of FreeU to diffusion architectures beyond the U-Net.

Abstract

In this paper, we uncover the untapped potential of diffusion U-Net, which serves as a “free lunch” that substantially improves the generation quality on the fly. We initially investigate the key contributions of the U-Net architecture to the denoising process and identify that its main backbone primarily contributes to denoising, whereas its skip connections mainly introduce high-frequency features into the decoder module, causing the network to overlook the backbone semantics. Capitalizing on this discovery, we propose a simple yet effective method, termed “FreeU”, that enhances generation quality without additional training or finetuning. Our key insight is to strategically re-weight the contributions sourced from the U-Net’s skip connections and backbone feature maps, to leverage the strengths of both components of the U-Net architecture. Promising results on image and video generation tasks demonstrate that our FreeU can be readily integrated into existing diffusion models, e.g., Stable Diffusion, DreamBooth, ModelScope, Rerender and ReVersion, to improve the generation quality with only a few lines of code. All you need is to adjust two scaling factors during inference. Project page: https://chenyangsi.top/FreeU/.
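
The frequency claim above (the backbone carrying semantics while skip connections mainly inject high-frequency detail) can be probed with a small diagnostic such as the radially averaged spectrum below; the averaging scheme is an illustrative choice, not the paper's exact analysis protocol.

```python
import torch
import torch.fft as fft


def relative_log_amplitude(feat):
    """Radially averaged log-amplitude spectrum of a (B, C, H, W) feature map,
    normalised by the DC component so curves from different layers are comparable."""
    spec = fft.fftshift(fft.fftn(feat.float(), dim=(-2, -1)), dim=(-2, -1))
    amp = spec.abs().mean(dim=(0, 1))                       # average over batch and channels
    H, W = amp.shape
    cy, cx = H // 2, W // 2
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    radius = ((yy - cy) ** 2 + (xx - cx) ** 2).float().sqrt().round().long()
    max_r = int(radius.max())
    profile = torch.zeros(max_r + 1)
    for r in range(max_r + 1):
        profile[r] = amp[radius == r].mean()                # mean amplitude at each radius
    return (profile / profile[0]).log()                     # entry 0 is the DC term

# Comparing the profiles of a backbone feature and its skip feature at the same
# decoder stage should show the skip feature retaining relatively more energy
# at high spatial frequencies.
```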