ORPO: Monolithic Preference Optimization without Reference Model

Authors: Jiwoo Hong, Noah Lee, James Thorne

What

This paper investigates the crucial role of supervised fine-tuning (SFT) in preference alignment for language models and introduces ORPO, a novel monolithic odds ratio preference optimization algorithm that eliminates the need for a separate preference alignment phase.

Why

This work is significant as it simplifies preference alignment, improves efficiency, and enhances performance compared to existing multi-stage methods like RLHF and DPO. It sheds light on the understudied role of SFT in preference alignment and offers a more streamlined approach.

How

The authors conduct experiments fine-tuning various language models (OPT, Phi-2, Llama-2, Mistral) using ORPO on preference datasets like HH-RLHF and UltraFeedback. They compare ORPO’s performance with SFT, RLHF, and DPO across various model sizes and evaluate instruction-following abilities using AlpacaEval and MT-Bench.

Result

Key findings include that a minor penalty for disfavored generation styles during SFT is sufficient for preference alignment. ORPO outperforms SFT, RLHF, and DPO in reward model win rates and achieves state-of-the-art results on AlpacaEval and MT-Bench, exceeding even larger language models.
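To get a feel for the size of that penalty, the short sketch below plugs hypothetical length-normalized sequence likelihoods into the odds-ratio term described in the abstract; the 0.35 / 0.20 values and the 0.1 weight are illustrative assumptions, not numbers from the paper.

```python
import math

# Hypothetical length-normalized sequence likelihoods under the model being
# fine-tuned (illustrative values, not taken from the paper).
p_favored, p_disfavored = 0.35, 0.20

def odds(p):
    return p / (1.0 - p)            # odds(y|x) = P(y|x) / (1 - P(y|x))

log_odds_ratio = math.log(odds(p_favored) / odds(p_disfavored))

# Relative penalty: -log sigmoid(log odds ratio of favored over disfavored).
penalty = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))

sft_nll = -math.log(p_favored)      # per-token NLL on the favored response
print(round(penalty, 3), round(sft_nll, 3))   # ~0.381 vs. ~1.05
# Weighted by a small coefficient (e.g. 0.1), the penalty adds only ~0.04 on
# top of the SFT loss: a minor, relative push away from the disfavored style.
```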

Limitations and Future Work

Limitations include the narrow range of preference alignment algorithms compared against and the lack of experiments beyond 7B-parameter models. Future work involves exploring more diverse datasets, analyzing ORPO’s impact on pre-trained models, and expanding to other NLP tasks.

Abstract

While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval 2.0 (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B).
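As a rough illustration of the monolithic objective described above, the sketch below combines a standard SFT loss with a weighted odds-ratio penalty. It assumes PyTorch and batched, length-normalized log-likelihoods for the chosen and rejected responses; the function name, argument names, and the weight value are illustrative, and the authors' released code is the authoritative implementation.

```python
import torch
import torch.nn.functional as F

def orpo_style_loss(chosen_logps, rejected_logps, chosen_nll, lam=0.1):
    """Sketch of an ORPO-style monolithic objective (illustrative, not official).

    chosen_logps / rejected_logps: length-normalized log P_theta(y|x) for the
        favored and disfavored responses, shape (batch,).
    chosen_nll: standard SFT negative log-likelihood on the favored response.
    lam: weight of the odds-ratio penalty (illustrative value).
    """
    # log odds(y|x) = log P - log(1 - P), computed in log space.
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))

    # Penalty term: -log sigmoid(log odds ratio of favored over disfavored).
    odds_ratio_loss = -F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # Single loss, no reference model: SFT term plus a small relative penalty.
    return chosen_nll + lam * odds_ratio_loss.mean()
```

Because the contrast is computed from the policy's own likelihoods, no frozen reference model or separate alignment stage is involved, which is what makes the objective monolithic.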