Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models
Authors: Xiaoshi Wu, Yiming Hao, Manyuan Zhang, Keqiang Sun, Zhaoyang Huang, Guanglu Song, Yu Liu, Hongsheng Li
What
This paper presents DRTune (Deep Reward Tuning), an algorithm for efficiently fine-tuning text-to-image diffusion models with deep reward supervision, enabling optimization against a variety of reward functions such as image aesthetics and symmetry.
Why
This research addresses the challenge of optimizing diffusion models with complex reward functions, particularly those requiring deep supervision, which is essential for controlling global image properties and improving the quality of generated images.
How
The authors propose DRTune, which employs two key techniques: 1) stopping gradients at the denoising network's input to prevent gradient explosion when back-propagating through the sampling chain, and 2) supervising only a strategically sampled subset of denoising steps to improve training efficiency. They compare DRTune with existing reward-training methods on a variety of reward functions, including aesthetic score, CLIPScore, PickScore, symmetry, compressibility, and objectness.
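The sketch below illustrates how these two techniques could fit into a single training pass, written against a diffusers-style UNet/scheduler/VAE API. It is a minimal illustration under those assumptions, not the authors' released code; `drtune_loss`, `reward_fn`, and `prompt_emb` are hypothetical names.

```python
import torch

def drtune_loss(unet, scheduler, vae, reward_fn, prompt_emb, noise, num_train_steps=5):
    """One DRTune-style forward pass (illustrative sketch, diffusers-style API assumed)."""
    timesteps = scheduler.timesteps
    # Technique 2: supervise only a sampled subset of denoising steps.
    train_idx = set(torch.randperm(len(timesteps))[:num_train_steps].tolist())

    x = noise
    for i, t in enumerate(timesteps):
        if i in train_idx:
            # Technique 1: stop the gradient at the denoiser input, so the
            # backward pass never recurses through the U-Net Jacobian;
            # parameters still receive gradients through the predicted noise.
            eps = unet(x.detach(), t, encoder_hidden_states=prompt_emb).sample
        else:
            with torch.no_grad():  # no graph is kept for non-supervised steps
                eps = unet(x, t, encoder_hidden_states=prompt_emb).sample
        # The scheduler update is linear in x, so the reward gradient can
        # still flow back to earlier supervised steps through this path.
        x = scheduler.step(eps, t, x).prev_sample

    # Deep supervision: score the final decoded image with a differentiable reward.
    image = vae.decode(x / vae.config.scaling_factor).sample
    return -reward_fn(image, prompt_emb)
```

An outer loop would call this with sampled prompts and noise, run `backward()` on the returned loss, and step an optimizer over the U-Net (or LoRA) parameters; memory then scales with the number of supervised steps rather than the length of the full sampling chain.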
Result
DRTune consistently outperforms baseline methods across these reward functions, particularly those that demand deep supervision of global image properties such as symmetry. The authors also demonstrate its practical value by fine-tuning Stable Diffusion XL 1.0 (SDXL 1.0) with the Human Preference Score v2.1 reward, producing Favorable Diffusion XL 1.0 (FDXL 1.0), which significantly improves image quality over SDXL 1.0 and achieves quality comparable to Midjourney v5.2.
Limitations & Future Work
The authors acknowledge the limitations of reward-based training, specifically the risk of reward hacking, where models might prioritize optimizing the reward function at the expense of overall image quality. They suggest exploring regularization techniques to mitigate this issue. Additionally, they recognize the potential negative social impact of advanced generative models, such as the creation of highly plausible misinformation and the amplification of biases present in the training data. Future work could focus on developing more robust reward functions and exploring methods to mitigate potential biases in training data.
Abstract
Optimizing a text-to-image diffusion model with a given reward function is an important but underexplored research area. In this study, we propose Deep Reward Tuning (DRTune), an algorithm that directly supervises the final output image of a text-to-image diffusion model and back-propagates through the iterative sampling process to the input noise. We find that training earlier steps in the sampling process is crucial for low-level rewards, and deep supervision can be achieved efficiently and effectively by stopping the gradient of the denoising network input. DRTune is extensively evaluated on various reward models. It consistently outperforms other algorithms, particularly for low-level control signals, where all shallow supervision methods fail. Additionally, we fine-tune the Stable Diffusion XL 1.0 (SDXL 1.0) model via DRTune to optimize Human Preference Score v2.1, resulting in the Favorable Diffusion XL 1.0 (FDXL 1.0) model. FDXL 1.0 significantly enhances image quality compared to SDXL 1.0 and achieves quality comparable to Midjourney v5.2.
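To see why stopping the gradient at the denoiser input avoids gradient explosion, consider a simplified DDIM-style update (a sketch, not the paper's exact notation), which is linear in the current latent:

\[
x_{t-1} = a_t\, x_t + b_t\, \epsilon_\theta(x_t, t).
\]

Back-propagating the reward through every step multiplies the full per-step Jacobians \( \partial x_{t-1} / \partial x_t = a_t I + b_t\, \partial \epsilon_\theta / \partial x_t \), whose network terms can compound over many steps. With the input detached, \( \epsilon_\theta(\mathrm{sg}(x_t), t) \), each Jacobian collapses to \( a_t I \), so the factor reaching the input noise is the bounded product \( \prod_t a_t \), while the parameters \( \theta \) still receive supervision through \( \epsilon_\theta \) at every trained step.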