A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions
Authors: Jack Urbanek, Florian Bordes, Pietro Astolfi, Mary Williamson, Vasu Sharma, Adriana Romero-Soriano
What
The paper introduces the Densely Captioned Images (DCI) dataset, a collection of 8012 natural images with human-annotated, mask-aligned descriptions averaging over 1000 words each, enabling the evaluation of vision-language models’ understanding of fine-grained image details.
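The released record layout is not spelled out here; as a rough mental model, each DCI entry can be thought of as an image plus a hierarchy of SAM submasks, each carrying its own human-written description. The sketch below illustrates that idea in Python; all field names are hypothetical, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of a DCI-style record: field names are illustrative,
# not the dataset's actual schema.
@dataclass
class MaskCaption:
    mask_id: str                      # identifier of a SAM submask
    bbox: Tuple[int, int, int, int]   # (x, y, w, h) crop covering the mask
    description: str                  # human-written text for this region
    children: List["MaskCaption"] = field(default_factory=list)  # nested submasks

@dataclass
class DCIEntry:
    image_path: str                   # source image from SA-1B
    overall_description: str          # caption for the full image
    regions: List[MaskCaption] = field(default_factory=list)

entry = DCIEntry(
    image_path="images/example.jpg",
    overall_description="A street market at dusk with ...",
    regions=[MaskCaption("m0", (120, 40, 300, 260),
                         "A wooden fruit stall stacked with oranges ...")],
)
print(len(entry.overall_description.split()), "words in the full-image caption")
```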
Why
Existing vision-language datasets rely on short, loosely aligned captions, which hinders both the development and the evaluation of models capable of deep visual-linguistic understanding. By providing dense captions precisely aligned to image regions, DCI offers a valuable resource for benchmarking and advancing vision-language models.
How
The authors first preprocessed images from the SA-1B dataset with the Segment Anything Model (SAM) to extract hierarchical submasks. They then used a multi-stage crowdsourcing pipeline with qualification tasks and iterative feedback to ensure high-quality annotations. To fit within the text limits of current models, they used LLaMA2 to generate summarized captions and negatives that stay under CLIP's 77-token limit, producing the summarized DCI (sDCI) dataset. Finally, they evaluated several state-of-the-art VLMs on sDCI using novel benchmark tasks such as Subcrop-Caption Matching (SCM) and negatives-based tests.
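The Subcrop-Caption Matching idea can be approximated with any off-the-shelf CLIP by scoring every (subcrop, caption) pair and checking whether each caption ranks its own subcrop highest. Below is a minimal sketch using the Hugging Face CLIP interface; the checkpoint, file paths, and captions are placeholders, and the paper's exact crop-extraction and scoring protocol may differ.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Minimal sketch of Subcrop-Caption Matching (SCM) with an off-the-shelf CLIP.
# The checkpoint and file paths are placeholders; the paper's exact protocol
# (crop extraction, tie-breaking, metrics) may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

# One image's subcrops and their summarized (<=77-token) captions, index-aligned.
crops = [Image.open(p) for p in ["crop_0.jpg", "crop_1.jpg", "crop_2.jpg"]]
captions = [
    "a rusty blue bicycle leaning on a brick wall",
    "a striped awning over a bakery window",
    "a puddle reflecting a neon sign",
]

inputs = processor(text=captions, images=crops, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# logits_per_text[i, j]: similarity of caption i to crop j.
pred = out.logits_per_text.argmax(dim=-1)          # best-matching crop per caption
accuracy = (pred == torch.arange(len(captions))).float().mean().item()
print(f"caption-to-crop matching accuracy: {accuracy:.2f}")
```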
Result
The results show that existing VLMs, even those trained with negatives or dense captions, struggle to match captions to their corresponding subregions of an image, exposing limits in fine-grained understanding. Fine-tuning CLIP on sDCI also yielded significant gains on benchmarks such as ARO and VL-Checklist, outperforming approaches like DAC that train on far larger but more loosely aligned data. These findings underscore the importance of dense, well-aligned image-text pairs for effective VLM training.
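For context, a bare-bones version of such a fine-tuning loop is sketched below: the standard symmetric CLIP contrastive loss over batches of (subcrop, summarized caption) pairs. The hyperparameters and the commented-out data loop are illustrative, not the paper's recipe.

```python
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

# Illustrative CLIP fine-tuning loop on (subcrop, summarized caption) pairs.
# Hyperparameters and the `batches` iterable are placeholders, not the paper's recipe.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, weight_decay=0.1)

def contrastive_step(images, texts):
    """One symmetric InfoNCE step over a batch of aligned image-text pairs."""
    # sDCI captions are summarized to fit CLIP's 77-token limit, so no truncation needed.
    inputs = processor(text=texts, images=images,
                       return_tensors="pt", padding=True).to(device)
    out = model(**inputs)
    labels = torch.arange(len(texts), device=device)
    loss = (F.cross_entropy(out.logits_per_image, labels)
            + F.cross_entropy(out.logits_per_text, labels)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# for images, texts in batches:   # batches of PIL subcrops and sDCI captions
#     print(contrastive_step(images, texts))
```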
Limitations & Future Work
The authors acknowledge limitations in using LLM-generated summaries, which may not capture all the nuances of the full annotations, and the limited text context length of current VLMs. They suggest future work exploring models with larger context windows to leverage the full DCI dataset, and investigating techniques like bitext mining to expand the dataset further.
Abstract
Curation methods for massive vision-language datasets trade off between dataset size and quality. However, even the highest-quality available curated captions are far too short to capture the rich visual detail in an image. To show the value of dense and highly-aligned image-text pairs, we collect the Densely Captioned Images (DCI) dataset, containing 8012 natural images human-annotated with mask-aligned descriptions averaging above 1000 words each. With precise and reliable captions associated with specific parts of an image, we can evaluate vision-language models’ (VLMs) understanding of image content with a novel task that matches each caption with its corresponding subcrop. As current models are often limited to 77 text tokens, we also introduce a summarized version (sDCI) in which each caption length is limited. We show that modern techniques that make progress on standard benchmarks do not correspond with significant improvement on our sDCI-based benchmark. Lastly, we finetune CLIP using sDCI and show significant improvements over the baseline despite a small training set. By releasing the first human-annotated dense image captioning dataset, we hope to enable the development of new benchmarks or fine-tuning recipes for the next generation of VLMs to come.
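The 77-token ceiling that motivates sDCI is easy to check with CLIP's own tokenizer; the caption string below is just a stand-in.

```python
from transformers import CLIPTokenizer

# CLIP's text encoder accepts at most 77 tokens (including start/end tokens),
# which is why full DCI captions (1000+ words) must be summarized into sDCI.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
print(tokenizer.model_max_length)  # 77

long_caption = "a detailed description of every object in the scene " * 50
ids = tokenizer(long_caption)["input_ids"]
print(len(ids), "tokens before truncation")

truncated = tokenizer(long_caption, truncation=True,
                      max_length=tokenizer.model_max_length)["input_ids"]
print(len(truncated), "tokens after truncating to CLIP's limit")
```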