UniFL: Improve Stable Diffusion via Unified Feedback Learning
Authors: Jiacheng Zhang, Jie Wu, Yuxi Ren, Xin Xia, Huafeng Kuang, Pan Xie, Jiashi Li, Xuefeng Xiao, Weilin Huang, Min Zheng, Lean Fu, Guanbin Li
What
This paper introduces UniFL, a novel unified feedback learning framework for improving text-to-image diffusion models. UniFL aims to address limitations in existing models, such as inferior visual quality, lack of aesthetic appeal, and inefficient inference.
Why
This paper is important because it presents a single, comprehensive solution for improving text-to-image diffusion models along several axes at once. By combining multiple feedback learning techniques, UniFL enhances the visual quality, aesthetic appeal, and inference speed of diffusion models, all of which are crucial for broader applications and user satisfaction.
How
UniFL achieves its goals through three key components: (1) Perceptual Feedback Learning (PeFL) leverages existing visual perception models (e.g., VGG, instance segmentation models) to enhance specific visual aspects like style and structure. (2) Decoupled Feedback Learning utilizes separate reward models for different aesthetic dimensions (e.g., color, layout, lighting, detail) and incorporates an active prompt selection strategy to mitigate overfitting. (3) Adversarial Feedback Learning treats the reward model as a discriminator in adversarial training, enabling optimization for faster inference without sacrificing quality.
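To make PeFL concrete, below is a minimal PyTorch sketch of how a frozen perception model could supervise the one-step denoised prediction during feedback learning. The diffusers-style UNet/VAE/scheduler interfaces, the function name `pefl_loss`, and the single feature-matching loss are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def pefl_loss(unet, vae, perception_model, scheduler,
              latents_gt, text_emb, t):
    """Minimal sketch of Perceptual Feedback Learning (PeFL).

    A frozen perception model (e.g., a VGG feature extractor) scores
    the one-step denoised prediction against the ground truth. All
    interfaces here follow diffusers conventions and are assumptions.
    """
    # Diffuse the ground-truth latents to timestep t.
    noise = torch.randn_like(latents_gt)
    noisy_latents = scheduler.add_noise(latents_gt, noise, t)

    # Predict the noise, then recover a one-step estimate of x0:
    # x0 = (x_t - sqrt(1 - alpha_bar) * eps) / sqrt(alpha_bar)
    noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_emb).sample
    alpha_bar = scheduler.alphas_cumprod[t].view(-1, 1, 1, 1)
    latents_x0 = (noisy_latents - (1 - alpha_bar).sqrt() * noise_pred) / alpha_bar.sqrt()

    # Decode predicted and ground-truth latents to pixel space.
    img_pred = vae.decode(latents_x0 / vae.config.scaling_factor).sample
    with torch.no_grad():
        img_gt = vae.decode(latents_gt / vae.config.scaling_factor).sample
        feat_gt = perception_model(img_gt)

    # Feedback signal: match the perception model's features,
    # supervising a specific visual aspect (e.g., style or structure).
    feat_pred = perception_model(img_pred)
    return F.mse_loss(feat_pred, feat_gt)
```

In the same spirit, decoupled feedback learning would swap `perception_model` for several dimension-specific reward heads (color, layout, lighting, detail) and sum their losses, rather than relying on one entangled aesthetic reward.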
Result
UniFL demonstrates superior performance in both quantitative and qualitative evaluations. It outperforms competitive methods like ImageReward, DreamShaper, and DPO in terms of FID, CLIP Score, and aesthetic scores on SD1.5 and SDXL architectures. User studies confirm UniFL’s superiority in generation quality and acceleration, surpassing LCM, SDXL-Turbo, and SDXL-Lightning. Notably, UniFL shows promising generalization capabilities, effectively transferring its improvements to downstream tasks like LoRA, ControlNet, and AnimateDiff.
Limitations & Future Work
The authors identify several limitations and directions for future work: exploring larger and more advanced visual perception models for enhanced supervision, further improving acceleration toward one-step inference, and streamlining the current two-stage optimization process into a single-stage approach.
Abstract
Diffusion models have revolutionized the field of image generation, leading to the proliferation of high-quality models and diverse downstream applications. However, despite these significant advancements, current competitive solutions still suffer from several limitations, including inferior visual quality, a lack of aesthetic appeal, and inefficient inference, without a comprehensive solution in sight. To address these challenges, we present UniFL, a unified framework that leverages feedback learning to enhance diffusion models comprehensively. UniFL stands out as a universal, effective, and generalizable solution applicable to various diffusion models, such as SD1.5 and SDXL. Notably, UniFL incorporates three key components: perceptual feedback learning, which enhances visual quality; decoupled feedback learning, which improves aesthetic appeal; and adversarial feedback learning, which optimizes inference speed. In-depth experiments and extensive user studies validate the superior performance of our proposed method in enhancing both the generation quality of diffusion models and their acceleration. For instance, UniFL surpasses ImageReward by 17% in user preference for generation quality and outperforms LCM and SDXL Turbo by 57% and 20%, respectively, in 4-step inference. Moreover, we have verified the efficacy of our approach in downstream tasks, including LoRA, ControlNet, and AnimateDiff.
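To illustrate the adversarial feedback component named in the abstract, here is a minimal GAN-style training-step sketch in which the reward model doubles as the discriminator. The scalar-output discriminator, the softplus (non-saturating) losses, and all names are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def adversarial_feedback_step(reward_disc, opt_g, opt_d,
                              img_pred, img_real, diffusion_loss):
    """Minimal sketch of adversarial feedback learning.

    The reward model acts as a discriminator: it learns to score real
    images above generated ones, while the diffusion model is pushed to
    raise its score, sharpening few-step samples. Names and loss forms
    are illustrative assumptions, not the paper's code.
    """
    # Discriminator (reward model) update: score real images up and
    # generated images down; detach img_pred so this step does not
    # backpropagate into the generator.
    opt_d.zero_grad()
    d_loss = (F.softplus(reward_disc(img_pred.detach())).mean()
              + F.softplus(-reward_disc(img_real)).mean())
    d_loss.backward()
    opt_d.step()

    # Generator (diffusion model) update: the usual diffusion loss plus
    # an adversarial term that maximizes the reward/discriminator score.
    opt_g.zero_grad()
    g_loss = diffusion_loss + F.softplus(-reward_disc(img_pred)).mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Alternating these two updates pushes few-step samples toward the scores the reward model assigns to real images, which is the mechanism behind the accelerated 4-step inference results reported above.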