RL for Consistency Models: Faster Reward Guided Text-to-Image Generation

Authors: Owen Oertell, Jonathan D. Chang, Yiyi Zhang, Kianté Brantley, Wen Sun

What

This paper introduces RLCM, a framework that fine-tunes text-to-image consistency models with reinforcement learning to optimize task-specific reward functions, yielding faster training and inference than RL fine-tuned diffusion models.

Why

The paper addresses limitations of text-to-image diffusion models: aligning generations with downstream objectives is difficult, and inference is slow. It leverages consistency models, which generate images in far fewer steps, and proposes an RL-based approach to fine-tune them for better alignment with downstream tasks.

How

The authors formulate the iterative inference of a consistency model as a Markov decision process (MDP) with a much shorter horizon than the corresponding diffusion-model MDP. RLCM, a policy gradient algorithm, then optimizes the consistency model's policy to maximize rewards that encode desired image properties. Experiments compare RLCM to DDPO (an RL fine-tuning method for diffusion models) on tasks such as image compressibility, aesthetic quality, and prompt-image alignment.
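
A minimal sketch of this formulation, assuming a toy consistency model, a fixed Gaussian policy width, and a placeholder reward (none of these are from the paper's released code): each inference step is one MDP transition, the reward arrives only on the final image, and a REINFORCE-style policy gradient updates the model over the short horizon.

```python
# Sketch only: consistency-model inference viewed as a short-horizon MDP,
# fine-tuned with a REINFORCE-style policy gradient. ToyConsistencyModel,
# the dimensions, SIGMA, and toy_reward are illustrative placeholders.
import torch
import torch.nn as nn

H = 4        # inference horizon (much shorter than a diffusion chain)
DIM = 16     # stand-in for image dimensionality
SIGMA = 0.1  # fixed std of the Gaussian policy around the model output

class ToyConsistencyModel(nn.Module):
    """Placeholder f_theta(x, t): maps a noisy input and step index to a clean estimate."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x, t):
        t_feat = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, t_feat], dim=-1))

def toy_reward(x):
    """Placeholder for a learned reward (aesthetics, compressibility, ...)."""
    return -x.pow(2).mean(dim=-1)

model = ToyConsistencyModel(DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for _ in range(100):
    x = torch.randn(8, DIM)            # initial state: pure noise
    log_probs = []
    for t in reversed(range(H)):       # each inference step is one MDP transition
        mean = model(x, t)
        dist = torch.distributions.Normal(mean, SIGMA)
        x = dist.sample()              # action = next (partially denoised) sample
        log_probs.append(dist.log_prob(x).sum(dim=-1))
    r = toy_reward(x)                  # sparse reward on the final image only
    adv = r - r.mean()                 # simple baseline to reduce variance
    loss = -(torch.stack(log_probs).sum(dim=0) * adv.detach()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the horizon H is only a handful of steps rather than the tens of denoising steps in a diffusion MDP, each policy gradient update requires far fewer model evaluations, which is where the training speedup comes from.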

Result

RLCM demonstrates faster training and inference than DDPO while achieving comparable or better image quality across the evaluated tasks. Notably, RLCM shows a 17x speedup in training time on the aesthetic task. Ablation studies highlight a trade-off between inference time and image quality that can be tuned by changing the number of inference steps in RLCM.
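
The knob behind that trade-off is the number of steps in the multistep consistency sampler, where each step costs one model evaluation. The sketch below uses a stand-in model, simplified re-noising, and made-up noise levels; it is meant only to show where the step count enters, not to reproduce the paper's sampler.

```python
# Sketch of multistep consistency sampling with an adjustable step count
# (placeholder model and noise levels; simplified re-noising rule).
import torch

def multistep_sample(f, sigmas, dim=16, batch=4):
    """f(x, sigma) -> clean estimate; sigmas: decreasing noise levels, one per step."""
    x = sigmas[0] * torch.randn(batch, dim)          # start from pure noise
    for i, sigma in enumerate(sigmas):
        x0 = f(x, sigma)                             # one model evaluation per step
        if i + 1 < len(sigmas):
            # re-noise the clean estimate down to the next (lower) noise level
            x = x0 + sigmas[i + 1] * torch.randn_like(x0)
        else:
            x = x0
    return x

# Fewer steps -> cheaper inference; more steps -> more compute spent on quality.
f = lambda x, sigma: x / (1.0 + sigma)               # stand-in for a trained consistency model
fast = multistep_sample(f, sigmas=[80.0, 0.5])                # 2 steps (the paper's fast regime)
slower = multistep_sample(f, sigmas=[80.0, 10.0, 2.0, 0.5])   # 4 steps
```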

Limitations & Future Work

The authors acknowledge limitations such as the use of sparse rewards in the current policy gradient method and suggest exploring dense reward strategies. Future work could also focus on developing loss functions that reinforce consistency, potentially further improving inference speed.
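
For concreteness, the sparse-versus-dense distinction amounts to how reward is credited across the short inference horizon; the snippet below uses hypothetical per-step rewards and random tensors purely to show the shape of each objective, not anything implemented in the paper.

```python
# Sketch of the credit-assignment difference (hypothetical shapes, not paper code).
import torch

H, batch = 4, 8
log_probs = torch.randn(H, batch)        # per-step log pi(a_t | s_t)

# Sparse reward (current setup): one terminal reward credited to every step.
r_final = torch.randn(batch)
loss_sparse = -(log_probs.sum(dim=0) * r_final).mean()

# Dense alternative (suggested future work): a reward at each inference step,
# credited via reward-to-go.
r_dense = torch.randn(H, batch)          # hypothetical per-step rewards
reward_to_go = torch.flip(torch.cumsum(torch.flip(r_dense, [0]), dim=0), [0])
loss_dense = -(log_probs * reward_to_go).mean()
```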

Abstract

Reinforcement learning (RL) has improved guided image generation with diffusion models by directly optimizing rewards that capture image quality, aesthetics, and instruction-following capabilities. However, the resulting generative policies inherit the same iterative sampling process of diffusion models that causes slow generation. To overcome this limitation, consistency models were proposed as a new class of generative models that directly map noise to data, yielding a model that can generate an image in as few as one sampling iteration. In this work, to optimize text-to-image generative models for task-specific rewards and enable fast training and inference, we propose a framework for fine-tuning consistency models via RL. Our framework, called Reinforcement Learning for Consistency Models (RLCM), frames the iterative inference process of a consistency model as an RL procedure. RLCM improves upon RL fine-tuned diffusion models on text-to-image generation capabilities and trades computation during inference time for sample quality. Experimentally, we show that RLCM can adapt text-to-image consistency models to objectives that are challenging to express with prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Compared to RL fine-tuned diffusion models, RLCM trains significantly faster, improves the quality of the generation measured under the reward objectives, and speeds up the inference procedure by generating high-quality images with as few as two inference steps. Our code is available at https://rlcm.owenoertell.com
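
As background for the "noise to data in as few as one step" claim, the defining property of a consistency function from the consistency models literature (not restated in this summary) is that every point on the same probability-flow ODE trajectory maps to that trajectory's origin:

```latex
% Self-consistency condition and boundary condition of a consistency function
\[
f_\theta(x_t, t) = f_\theta(x_{t'}, t') \quad \text{for all } t, t' \in [\epsilon, T],
\qquad f_\theta(x_\epsilon, \epsilon) = x_\epsilon .
\]
```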