Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

Authors: Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, Quanquan Gu

What

This paper proposes a new fine-tuning method called Self-Play fine-tuning (SPIN) for Large Language Models (LLMs) that leverages a self-play mechanism to improve a model’s performance without requiring additional human-annotated data.

Why

This paper is important because it offers a way to enhance LLM performance without expensive and time-consuming data annotation beyond the initial fine-tuning dataset. It also provides a theoretical guarantee that the training objective's global optimum is reached only when the model's policy matches the target data distribution, and it demonstrates the method's empirical effectiveness on several benchmark datasets.

How

The authors propose a self-play mechanism in which the LLM acts as both the main player and the opponent. The opponent is a frozen copy of the LLM from the previous iteration, which generates synthetic responses to the prompts in the SFT dataset; the main player is trained to distinguish these self-generated responses from the human-annotated ones. Because the main player and the generator are the same model, learning to prefer the human responses over its own earlier outputs pushes the LLM's generation distribution toward the target data distribution, and repeating this process over iterations progressively refines the model (a loss sketch follows below).
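
To make the mechanism concrete, here is a minimal, hypothetical sketch of a sequence-level loss such a main player could be trained with, assuming sequence log-likelihoods have already been computed. The function name spin_loss, the argument names, and the beta default are illustrative choices, not taken from the authors' released code.

```python
# Hypothetical sketch of a SPIN-style logistic loss. Assumes per-token
# log-probabilities have been summed into sequence-level log-likelihoods.
# Names and defaults are illustrative, not from the official SPIN repo.
import torch
import torch.nn.functional as F

def spin_loss(
    policy_logps_real: torch.Tensor,       # log p_theta(y_real | x) under the model being trained
    policy_logps_synthetic: torch.Tensor,  # log p_theta(y_synth | x) for responses from the previous iterate
    ref_logps_real: torch.Tensor,          # log p_{theta_t}(y_real | x) under the frozen previous iterate
    ref_logps_synthetic: torch.Tensor,     # log p_{theta_t}(y_synth | x)
    beta: float = 0.1,                     # regularization scale (lambda in the paper's notation)
) -> torch.Tensor:
    """Main player learns to prefer human responses over the opponent's self-generated ones."""
    real_margin = policy_logps_real - ref_logps_real
    synthetic_margin = policy_logps_synthetic - ref_logps_synthetic
    # Logistic loss on the gap; small when the model separates real from synthetic responses.
    return -F.logsigmoid(beta * (real_margin - synthetic_margin)).mean()
```

The logistic form means the loss shrinks as the model assigns relatively higher likelihood (versus the previous iterate) to human responses than to its own earlier generations, which is exactly the discrimination task described above.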

Result

The paper shows that SPIN significantly improves LLM performance on benchmarks such as the HuggingFace Open LLM Leaderboard and MT-Bench. Notably, SPIN at iteration 0 already achieves results comparable to Direct Preference Optimization (DPO) trained with extra GPT-4 preference data, and subsequent iterations outperform it, even though SPIN requires no additional preference data. The paper also demonstrates the importance of iterative training and analyzes the effect of training data size.

Limitations and Future Work

The paper acknowledges that the fixed target data distribution, derived from human-annotated data, caps the performance SPIN can reach. Future work could explore dynamically changing target distributions to push LLM capabilities beyond human level, as well as methods for reducing the volume of synthetic data needed for training.

Abstract

Harnessing the power of human-annotated data through Supervised Fine-Tuning (SFT) is pivotal for advancing Large Language Models (LLMs). In this paper, we delve into the prospect of growing a strong LLM out of a weak one without the need for acquiring additional human-annotated data. We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN), which starts from a supervised fine-tuned model. At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself. More specifically, the LLM generates its own training data from its previous iterations, refining its policy by discerning these self-generated responses from those obtained from human-annotated data. Our method progressively elevates the LLM from a nascent model to a formidable one, unlocking the full potential of human-annotated demonstration data for SFT. Theoretically, we prove that the global optimum to the training objective function of our method is achieved only when the LLM policy aligns with the target data distribution. Empirically, we evaluate our method on several benchmark datasets including the HuggingFace Open LLM Leaderboard, MT-Bench, and datasets from Big-Bench. Our results show that SPIN can significantly improve the LLM’s performance across a variety of benchmarks and even outperform models trained through direct preference optimization (DPO) supplemented with extra GPT-4 preference data. This sheds light on the promise of self-play, enabling the achievement of human-level performance in LLMs without the need for expert opponents. Codes are available at https://github.com/uclaml/SPIN.
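
For reference, the training objective whose global optimum the abstract refers to can be written roughly as follows. This is a reconstruction in the paper's spirit rather than a verbatim quote, with $q$ the prompt distribution, $p_{\mathrm{data}}$ the human data distribution, $p_{\theta_t}$ the opponent from iteration $t$, $\ell$ a monotonically decreasing convex loss such as the logistic loss, and $\lambda$ a regularization parameter:

$$
L_{\mathrm{SPIN}}(\theta) \;=\; \mathbb{E}_{x \sim q,\; y \sim p_{\mathrm{data}}(\cdot\mid x),\; y' \sim p_{\theta_t}(\cdot\mid x)}\!\left[\ell\!\left(\lambda \log \frac{p_\theta(y\mid x)}{p_{\theta_t}(y\mid x)} \;-\; \lambda \log \frac{p_\theta(y'\mid x)}{p_{\theta_t}(y'\mid x)}\right)\right]
$$

Minimizing this over $\theta$ rewards assigning higher relative likelihood to human responses $y$ than to self-generated responses $y'$, and the global minimum is attained when $p_\theta$ matches $p_{\mathrm{data}}$, consistent with the abstract's claim.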