Generative Multimodal Models are In-Context Learners

Authors: Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Zhengxiong Luo, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, Xinlong Wang

What

The paper introduces Emu2, a generative multimodal model with 37 billion parameters trained on large-scale multimodal sequences of text, image-text pairs, and interleaved image-text-video data, and shows that it has strong in-context learning capabilities on multimodal tasks.

Why

This work shows that task-agnostic in-context learning in large multimodal models improves substantially with effective scaling, a step toward adaptable, general-purpose multimodal systems that can solve diverse tasks from a few demonstrations or simple instructions rather than task-specific training.

How

The authors trained Emu2 with a unified autoregressive objective that predicts the next multimodal element (visual embedding or text token) in a sequence, using a large-scale corpus of text, image-text pairs, and interleaved image-text-video data. They then instruction-tuned the model on dedicated datasets for instruction following and controllable visual generation.
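A minimal sketch of one way such a unified next-element objective can be written, assuming the model exposes a text head (vocabulary logits) and a visual head (predicted embeddings): cross-entropy supervises positions whose target is a text token, and a regression loss supervises positions whose target is a visual embedding. The function and tensor names below are illustrative, not taken from the authors' code, and the equal weighting of the two terms is an assumption.

import torch
import torch.nn.functional as F

def unified_autoregressive_loss(text_logits, visual_preds,
                                text_targets, visual_targets, is_text):
    # text_logits    (B, T, V): next-token logits from the text head
    # visual_preds   (B, T, D): next-embedding predictions from the visual head
    # text_targets   (B, T)   : target token ids (read only where is_text is True)
    # visual_targets (B, T, D): target visual embeddings (read only where is_text is False)
    # is_text        (B, T)   : boolean mask, True where the next element is a text token
    text_loss = F.cross_entropy(text_logits[is_text], text_targets[is_text])
    visual_loss = F.mse_loss(visual_preds[~is_text], visual_targets[~is_text])
    # One scalar objective over the interleaved multimodal sequence;
    # the 1:1 weighting of the two terms is a free choice in this sketch.
    return text_loss + visual_loss

# Toy usage with random tensors.
B, T, V, D = 2, 8, 100, 16
is_text = torch.rand(B, T) < 0.5
loss = unified_autoregressive_loss(
    torch.randn(B, T, V), torch.randn(B, T, D),
    torch.randint(V, (B, T)), torch.randn(B, T, D), is_text)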

Result

Emu2 achieves state-of-the-art performance on a range of multimodal benchmarks, including visual question answering, image captioning, and text-to-image generation. Its few-shot performance improves as more in-context examples are provided, and it exhibits emergent abilities such as visual prompting and object-grounded generation.
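For concreteness, a few-shot multimodal prompt is just an interleaved sequence of demonstrations followed by a query, which the model continues autoregressively. The toy layout below is illustrative only; the Image placeholder type and the prompt wording are assumptions, not the paper's interface.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Image:
    path: str  # placeholder standing in for an actual image / visual embedding

# Two in-context demonstrations followed by a query; the model is expected
# to continue the sequence with the answer for the last image.
prompt: List[Union[Image, str]] = [
    Image("cat.jpg"),   "This animal is a cat.",
    Image("dog.jpg"),   "This animal is a dog.",
    Image("query.jpg"), "This animal is a",
]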

Limitations and Future Work

The authors acknowledge limitations concerning potential biases in the training data and the possibility of generating harmful content. Future work includes enhancing robustness, reducing hallucinations, improving fairness, and closing the performance gap with closed multimodal systems on complex reasoning tasks.

Abstract

The human ability to easily solve multimodal tasks in context (i.e., with only a few demonstrations or simple instructions) is what current multimodal systems have largely struggled to imitate. In this work, we demonstrate that the task-agnostic in-context learning capabilities of large multimodal models can be significantly enhanced by effective scaling-up. We introduce Emu2, a generative multimodal model with 37 billion parameters, trained on large-scale multimodal sequences with a unified autoregressive objective. Emu2 exhibits strong multimodal in-context learning abilities, even emerging to solve tasks that require on-the-fly reasoning, such as visual prompting and object-grounded generation. The model sets a new record on multiple multimodal understanding tasks in few-shot settings. When instruction-tuned to follow specific instructions, Emu2 further achieves new state-of-the-art on challenging tasks such as question answering benchmarks for large multimodal models and open-ended subject-driven generation. These achievements demonstrate that Emu2 can serve as a base model and general-purpose interface for a wide range of multimodal tasks. Code and models are publicly available to facilitate future research.