A General Theoretical Paradigm to Understand Learning from Human Preferences

Authors: Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, Rémi Munos

What

This paper presents a theoretical framework, called Ψ-preference optimization (ΨPO), for learning from human preferences, unifying existing methods like RLHF and DPO and highlighting their potential pitfalls.
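Roughly, the ΨPO objective takes the form below (a sketch, with notation lightly adapted from the paper: p* is the true preference probability, μ the behavior policy that generates competitor completions, Ψ a non-decreasing map, π_ref the reference policy, and τ the regularization strength):

    \max_{\pi} \; \mathbb{E}_{y \sim \pi,\, y' \sim \mu}\big[\Psi\big(p^{*}(y \succ y')\big)\big] \;-\; \tau\, D_{\mathrm{KL}}\big(\pi \,\|\, \pi_{\mathrm{ref}}\big)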

Why

Despite their practical success, particularly in aligning large language models with human preferences, current preference-learning methods lack a unified theoretical understanding; the paper addresses this gap.

How

The authors introduce ΨPO as a general objective function, analyze specific cases like RLHF and DPO, identify potential overfitting issues, and propose a simplified variant, Identity-PO (IPO), with a computationally efficient algorithm (sketched below).
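For Ψ taken to be the identity, the paper derives a sampled loss that can be minimized directly on a dataset D of preferred/dispreferred completion pairs (y_w, y_l); a sketch, with notation lightly adapted:

    \min_{\pi} \; \mathbb{E}_{(y_w, y_l) \sim D}\Big[\Big(h_{\pi}(y_w, y_l) - \tfrac{1}{2\tau}\Big)^{2}\Big],
    \qquad h_{\pi}(y, y') \;=\; \log\frac{\pi(y)\,\pi_{\mathrm{ref}}(y')}{\pi(y')\,\pi_{\mathrm{ref}}(y)}

In words: the log-likelihood-ratio margin between the preferred and dispreferred completions is regressed onto a finite target set by the regularization strength, rather than being pushed to infinity.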

Result

The paper shows that ΨPO generalizes RLHF and DPO, and that both are vulnerable to overfitting, especially when preferences are nearly deterministic, due to their reliance on the Bradley-Terry model. The proposed IPO method, which uses the identity mapping in ΨPO, avoids this failure mode by directly optimizing regularized total preferences. Experiments on illustrative bandit examples demonstrate IPO's improved stability and adherence to the reference policy compared to DPO.
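In ΨPO terms (a sketch, notation adapted), the Bradley-Terry-based methods correspond to the log-odds choice of Ψ, while IPO takes the identity:

    \Psi_{\mathrm{RLHF/DPO}}(q) \;=\; \log\frac{q}{1-q}, \qquad \Psi_{\mathrm{IPO}}(q) \;=\; q

Because the log-odds map blows up as empirical preferences approach 0 or 1, the KL regularization can be overwhelmed on finite data; the bounded identity map does not have this problem, which is the intuition behind IPO's stability.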

Limitations / Future Work

While the paper provides a theoretical analysis and illustrative examples, future work should focus on scaling up IPO to more complex scenarios, such as training large language models on human preference data, to assess its real-world effectiveness.

Abstract

The prevalent deployment of learning from human preferences through reinforcement learning (RLHF) relies on two important approximations: the first assumes that pairwise preferences can be substituted with pointwise rewards; the second assumes that a reward model trained on these pointwise rewards can generalize from collected data to out-of-distribution data sampled by the policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an approach that bypasses the second approximation and learns a policy directly from collected data without the reward modelling stage. However, this method still heavily relies on the first approximation. In this paper we try to gain a deeper theoretical understanding of these practical algorithms. In particular we derive a new general objective called ΨPO for learning from human preferences that is expressed in terms of pairwise preferences and therefore bypasses both approximations. This new general objective allows us to perform an in-depth analysis of the behavior of RLHF and DPO (as special cases of ΨPO) and to identify their potential pitfalls. We then consider another special case for ΨPO by setting Ψ simply to Identity, for which we can derive an efficient optimisation procedure, prove performance guarantees and demonstrate its empirical superiority to DPO on some illustrative examples.
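To make the "efficient optimisation procedure" concrete, here is a minimal PyTorch-style sketch of the per-pair IPO regression loss, assuming the summed log-probabilities of the preferred (y_w) and dispreferred (y_l) completions under the trained policy and the reference model are already available as tensors. The function name and the default value of tau are illustrative, not taken from the paper.

    import torch

    def ipo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, tau=0.1):
        # Margin h_pi(y_w, y_l) = log [ pi(y_w) * pi_ref(y_l) ] / [ pi(y_l) * pi_ref(y_w) ],
        # computed from log-probabilities.
        h = (policy_logp_w - policy_logp_l) - (ref_logp_w - ref_logp_l)
        # Regress the margin onto the finite target 1/(2*tau); tau is illustrative here.
        return ((h - 1.0 / (2.0 * tau)) ** 2).mean()

By contrast, DPO passes the same margin h through a logistic loss of the form -log sigmoid(beta * h), which drives h towards infinity when the observed preferences are deterministic; the squared target above keeps the optimum finite, which is the stability property the paper emphasizes.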