Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

Authors: Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li

What

This paper introduces D3PO (Direct Preference for Denoising Diffusion Policy Optimization), a method for fine-tuning diffusion models directly from human feedback without training a separate reward model, sidestepping the cost and effort that reward-model training imposes on traditional RLHF pipelines.

Why

By eliminating the resource-intensive step of training a separate reward model, this work offers a more efficient and cost-effective way to align diffusion models with human preferences, benefiting applications such as text-to-image generation.

How

The authors reinterpret the denoising process of a diffusion model as a multi-step Markov Decision Process (MDP) in which each denoising step is an action. They then extend the Direct Preference Optimization (DPO) framework, originally designed for large language models, to this MDP, which lets them update the model's policy directly from human preference pairs and bypass the need for a reward model.
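To make the per-step objective concrete, here is a minimal sketch of a DPO-style loss applied to a single denoising step, written in PyTorch. The function names (`gaussian_logp`, `d3po_step_loss`) and the hyperparameter `beta` are illustrative assumptions, not the authors' released implementation: each denoising action (producing x_{t-1} from x_t) is scored by its log-probability under the fine-tuned model and a frozen reference model, and the preferred sample's log-ratio is pushed above the rejected one's.

```python
import torch
import torch.nn.functional as F


def gaussian_logp(x_prev, mean, std):
    """Log-probability of the sampled latent x_{t-1} under the Gaussian
    denoising step N(mean, std^2), summed over all latent dimensions."""
    dist = torch.distributions.Normal(mean, std)
    return dist.log_prob(x_prev).flatten(1).sum(dim=-1)


def d3po_step_loss(logp_w, logp_ref_w, logp_l, logp_ref_l, beta=0.1):
    """DPO-style objective for one denoising step (a sketch, not the official code).

    logp_w / logp_l         : log-prob of the taken action for the human-preferred
                              ("winner") and rejected ("loser") trajectory, under
                              the model being fine-tuned.
    logp_ref_w / logp_ref_l : the same quantities under the frozen reference model.
    beta                    : temperature controlling deviation from the reference.
    """
    # Implicit per-step reward: scaled log-ratio between policy and reference.
    reward_w = beta * (logp_w - logp_ref_w)
    reward_l = beta * (logp_l - logp_ref_l)
    # Maximize the probability that the preferred step out-scores the rejected one.
    return -F.logsigmoid(reward_w - reward_l).mean()
```

In practice such a loss would be accumulated (or sampled) over the denoising steps of each preferred/rejected image pair, with the reference model kept frozen throughout fine-tuning.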

Result

D3PO matched or surpassed methods that rely on reward models on tasks such as improving image compressibility and aesthetic quality. It also proved effective in settings where no robust reward model is available, successfully reducing image distortions, enhancing image safety, and improving prompt-image alignment.

Limitations and Future Work

The paper acknowledges limitations arising from its assumptions, such as the normality of the expected return and the use of relative rather than absolute reward scales. Future work may relax these assumptions and investigate D3PO's effectiveness in more complex real-world applications.

Abstract

Using reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences, then leverage RL techniques to fine-tune the underlying models. However, crafting an efficient reward model demands extensive datasets, optimal architecture, and manual hyperparameter tuning, making the process both time and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the necessity for a reward model. However, the extensive GPU memory requirement of the diffusion model’s denoising process hinders the direct application of the DPO method. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. The theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively functions as the optimal reward model trained using human feedback data to guide the learning process. This approach requires no training of a reward model, proving to be more direct, cost-effective, and minimizing computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference, delivering comparable results to methods using ground-truth rewards. Moreover, D3PO demonstrates the ability to reduce image distortion rates and generate safer images, overcoming challenges lacking robust reward models. Our code is publicly available at https://github.com/yk7333/D3PO.
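The abstract notes that the experiments use "the relative scale of objectives as a proxy for human preference." As an illustration only (not the authors' code, and with hypothetical helper names), the sketch below builds a winner/loser pair for the compressibility task by comparing the JPEG-encoded sizes of two generated images.

```python
import io

from PIL import Image


def jpeg_bytes(image: Image.Image, quality: int = 95) -> int:
    """Compressibility objective: the size of the image once JPEG-encoded
    (smaller means more compressible)."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.tell()


def preference_pair(img_a: Image.Image, img_b: Image.Image):
    """Return (winner, loser): the more compressible image is treated as
    'preferred', standing in for a human judgment on this objective."""
    return (img_a, img_b) if jpeg_bytes(img_a) < jpeg_bytes(img_b) else (img_b, img_a)
```

Pairs labeled this way can then be fed to a per-step DPO-style loss like the one sketched earlier, so no explicit reward model is ever trained.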