Can MLLMs Perform Text-to-Image In-Context Learning?
Authors: Yuchen Zeng, Wonjun Kang, Yicong Chen, Hyung Il Koo, Kangwook Lee
What
This paper introduces Text-to-Image In-Context Learning (T2I-ICL), in which Multimodal Large Language Models (MLLMs) generate an image from a textual query together with a few in-context image-text example pairs, and presents CoBSAT, a new benchmark dataset for evaluating MLLMs on T2I-ICL tasks. A minimal prompt-construction sketch follows below.
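To make the task concrete, here is a minimal sketch of how a T2I-ICL prompt could be assembled: demonstration text-image pairs share a latent attribute (e.g., the color red), and the model must generate an image of the query object with that attribute. The helper name, file paths, and prompt structure are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of assembling an interleaved T2I-ICL prompt (assumed structure, not the
# paper's exact format). The model must infer the shared latent attribute from
# the demonstrations and generate an image of the query object with it.

from dataclasses import dataclass
from typing import List, Union


@dataclass
class Demo:
    text: str        # textual condition, e.g. the object name ("car")
    image_path: str  # image of that object exhibiting the latent attribute (e.g. red)


def build_t2i_icl_prompt(demos: List[Demo], query_text: str) -> List[Union[str, dict]]:
    """Interleave demonstration text/image pairs, then append the query text."""
    prompt: List[Union[str, dict]] = []
    for demo in demos:
        prompt.append(demo.text)
        prompt.append({"image": demo.image_path})  # placeholder for an image token
    prompt.append(query_text)  # the model should answer with an image, not text
    return prompt


if __name__ == "__main__":
    demos = [Demo("car", "red_car.jpg"), Demo("apple", "red_apple.jpg")]
    # Expected model output: an image of a red hat.
    print(build_t2i_icl_prompt(demos, "hat"))
```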
Why
T2I-ICL has received far less attention than the more common image-to-text ICL, yet it matters for applications such as product design and personalized content creation. This paper fills that gap by providing a benchmark for evaluating and understanding MLLMs' T2I-ICL capabilities.
How
The authors created CoBSAT, a dataset with 10 tasks covering five themes (color, background, style, action, texture), each with object-inference and attribute-inference variations. They evaluated six state-of-the-art MLLMs on this dataset, using CLIP and LLaVA as automatic evaluators to check whether the generated images (or image descriptions) match the true labels; a sketch of the CLIP-based check appears below.
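A minimal sketch of the CLIP-based check described above: score a generated image against a set of candidate labels and count it correct if the highest-scoring label matches the ground truth. The model choice and label phrasing here are assumptions; the paper's exact evaluation setup may differ.

```python
# Score a generated image against candidate labels with CLIP and check whether
# the top-ranked label equals the ground-truth label (assumed evaluation sketch).

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def clip_label_accuracy(image: Image.Image, candidate_labels: list, true_label: str) -> bool:
    """Return True if CLIP ranks the true label highest for the generated image."""
    inputs = processor(text=candidate_labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, num_labels)
    predicted = candidate_labels[logits.argmax(dim=-1).item()]
    return predicted == true_label


# Example: did the model generate a *red* hat when "red" was the latent attribute?
# image = Image.open("generated_hat.png")
# print(clip_label_accuracy(image, ["red hat", "green hat", "blue hat"], "red hat"))
```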
Result
The study found that existing MLLMs struggle with T2I-ICL, with SEED-LLaMA performing best in image generation and Gemini, Qwen-VL, and GPT-4V excelling in generating image descriptions. The paper also identifies multimodality and image generation as key challenges in T2I-ICL. Notably, fine-tuning models on CoBSAT and incorporating Chain-of-Thought prompting led to significant performance improvements.
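The Chain-of-Thought improvement mentioned above can be pictured as a two-step prompt: first ask the MLLM to reason in text about the attribute shared by the demonstrations, then condition the image-generation request on that inferred attribute. The prompt wording below is an assumption, not the paper's exact template.

```python
# Rough two-stage Chain-of-Thought prompting sketch for T2I-ICL (assumed wording).

REASONING_PROMPT = (
    "Here are example pairs of a word and an image. "
    "Describe, step by step, the attribute that all the images share."
)


def cot_generation_prompt(inferred_attribute: str, query_text: str) -> str:
    """Build the second-stage request once the attribute has been inferred."""
    return f"Now generate an image of a {query_text} with the attribute: {inferred_attribute}."


# Stage 1: send the demonstrations plus REASONING_PROMPT; read back, e.g., "all images are red".
# Stage 2: send cot_generation_prompt("red", "hat") and collect the generated image.
```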
Limitations & Future Work
The paper acknowledges limitations in demonstration selection and the need to explore additional prompt engineering techniques like Tree-of-Thought and self-consistency sampling. Future work includes expanding CoBSAT with more themes and attributes, focusing on image editing tasks, and developing multimodal prompt engineering techniques.
Abstract
The evolution from Large Language Models (LLMs) to Multimodal Large Language Models (MLLMs) has spurred research into extending In-Context Learning (ICL) to its multimodal counterpart. Existing studies have primarily concentrated on image-to-text ICL. However, Text-to-Image ICL (T2I-ICL), with its unique characteristics and potential applications, remains underexplored. To address this gap, we formally define the task of T2I-ICL and present CoBSAT, the first T2I-ICL benchmark dataset, encompassing ten tasks. Utilizing our dataset to benchmark six state-of-the-art MLLMs, we uncover considerable difficulties MLLMs encounter in solving T2I-ICL. We identify the primary challenges as the inherent complexity of multimodality and image generation, and show that strategies such as fine-tuning and Chain-of-Thought prompting help to mitigate these difficulties, leading to notable improvements in performance. Our code and dataset are available at https://github.com/UW-Madison-Lee-Lab/CoBSAT.