Aligning Modalities in Vision Large Language Models via Preference Fine-tuning

Authors: Yiyang Zhou, Chenhang Cui, Rafael Rafailov, Chelsea Finn, Huaxiu Yao

What

This paper introduces POVID, a novel approach for aligning image and text modalities in Vision Large Language Models (VLLMs) that mitigates hallucination by using AI-generated dispreferred responses for preference tuning.

Why

This paper addresses the significant problem of hallucinations in VLLMs, where the model generates text that doesn't accurately reflect the image content. Mitigating these hallucinations is crucial for deploying VLLMs in real-world applications where accuracy is paramount.

How

The authors propose POVID, a two-stage approach. First, they use GPT-4V to inject plausible hallucinations into ground-truth captions and reasoning answers, yielding dispreferred responses. Second, they add noise to the input images during training to trigger the VLLM's inherent hallucination patterns, further improving modality alignment via a modified DPO loss.
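The sketch below illustrates the two dispreferred-data sources under stated assumptions: the GPT-4V prompt wording, the `gpt-4-vision-preview` model name, the linear blending noise schedule, and the function names are all illustrative, not the authors' exact pipeline.

```python
import base64
import torch
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def hallucinated_answer(image_path: str, ground_truth: str) -> str:
    """Stage 1 (sketch): ask GPT-4V to rewrite the ground-truth answer with
    plausible errors; the rewrite becomes the dispreferred response."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    prompt = (
        "Rewrite the following answer so that it stays fluent but introduces "
        "plausible errors about objects, attributes, or relations in the image:\n"
        f"{ground_truth}"
    )
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # illustrative model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

def distort_image(pixel_values: torch.Tensor, noise_level: float = 0.5) -> torch.Tensor:
    """Stage 2 (sketch): blend the preprocessed image tensor with Gaussian noise
    so the VLLM's own hallucination patterns are triggered when conditioned on it."""
    noise = torch.randn_like(pixel_values)
    return (1.0 - noise_level) * pixel_values + noise_level * noise
```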

Result

POVID significantly outperforms previous VLLM preference tuning methods, achieving a 31.78% improvement on hallucination benchmarks and consistent gains on comprehensive VLLM benchmarks. It effectively reduces hallucinations and shows superior performance in image captioning and detailed description tasks.

Limitations & Future Work

The paper doesn’t explicitly mention limitations. Future work could explore different noise injection techniques, expand to other VLLM architectures, and investigate the generalization of POVID to other multimodal tasks beyond image captioning and reasoning.

Abstract

Instruction-following Vision Large Language Models (VLLMs) have achieved significant progress recently on a variety of tasks. These approaches merge strong pre-trained vision models and large language models (LLMs). Since these components are trained separately, the learned representations need to be aligned with joint training on additional image-language pairs. This procedure is not perfect and can cause the model to hallucinate, i.e., provide answers that do not accurately reflect the image, even when the core LLM is highly factual and the vision backbone has sufficiently complete representations. In this work, we frame the hallucination problem as an alignment issue and tackle it with preference tuning. Specifically, we propose POVID to generate feedback data with AI models. We use ground-truth instructions as the preferred response and a two-stage approach to generate dispreferred data. First, we prompt GPT-4V to inject plausible hallucinations into the correct answer. Second, we distort the image to trigger the inherent hallucination behavior of the VLLM. This is an automated approach that does not rely on human data generation or require a perfect expert, making it easily scalable. Finally, both of these generation strategies are integrated into an RLHF pipeline via Direct Preference Optimization. In experiments across broad benchmarks, we show that we can not only reduce hallucinations but also improve model performance across standard benchmarks, outperforming prior approaches. Our data and code are available at https://github.com/YiyangZhou/POVID.
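For reference, the DPO objective that POVID builds on can be written as a pairwise loss over per-response log-probabilities. The sketch below is a generic PyTorch formulation of vanilla DPO, assuming the ground-truth instruction as the preferred response and the AI-generated answer as the dispreferred one; it is not the paper's modified objective, which additionally accounts for responses conditioned on the distorted image.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Vanilla DPO loss over summed per-response log-probs, each of shape [batch].

    `chosen` = ground-truth (preferred) answer, `rejected` = AI-generated
    dispreferred answer; POVID's modified loss extends this with a term for
    responses generated from the noised image (not shown here).
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to place a larger reference-relative margin on the
    # preferred answer than on the hallucinated one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```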