Demystifying CLIP Data
Authors: Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer
What
This paper investigates the data curation process behind CLIP, proposing MetaCLIP, a transparent algorithm that uses metadata and balancing techniques to create high-quality image-text datasets from web sources like CommonCrawl.
Why
The work sheds light on the critical role of data curation in CLIP's success, provides a method to reproduce and potentially outperform CLIP's dataset, and underscores the importance of data transparency in AI.
How
The authors reconstruct CLIP's metadata and analyze the sub-string matching and balancing techniques likely employed in CLIP's data curation. They then propose MetaCLIP, an algorithm that takes a raw data pool and metadata as input and outputs a dataset balanced over the metadata entries. They evaluate MetaCLIP by training models with CLIP's architecture and training settings on their curated data and comparing performance against models trained on CLIP's WIT400M data and other publicly available datasets.
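For concreteness, below is a minimal sketch of the sub-string matching step described above: an image-text pair is kept only if its caption contains at least one metadata entry, and the matched entries are recorded for the later balancing step. This is an illustrative assumption about the mechanics, not the authors' released implementation; `metadata` stands in for the CLIP-style concept list (WordNet synsets, Wikipedia titles, and similar).

```python
# Hedged sketch of metadata sub-string matching (not the official MetaCLIP code).

def substr_matching(caption: str, metadata: list[str]) -> list[int]:
    """Return indices of metadata entries that occur as substrings of the caption."""
    # Pad with spaces so whole entries are matched at word boundaries.
    text = " " + caption.lower() + " "
    return [i for i, entry in enumerate(metadata) if " " + entry + " " in text]


# Hypothetical usage: captions that match no entry are dropped from the pool.
metadata = ["golden retriever", "eiffel tower", "sunset"]
caption = "A golden retriever watching the sunset"
matches = substr_matching(caption, metadata)  # -> [0, 2]
```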
Result
MetaCLIP, trained on a 400M image-text pair dataset curated from CommonCrawl, outperforms models trained on CLIP's proprietary WIT400M dataset on multiple benchmarks, including ImageNet zero-shot classification. Scaling MetaCLIP to 1B and 2.5B image-text pairs further improves accuracy across ViT model sizes, all within the same training budget as the original CLIP.
Limitations & Future Work
The authors acknowledge that their reconstruction of CLIP’s metadata might not be perfectly accurate due to limited information available publicly. They also plan to improve the scalability of their data pipeline for handling even larger datasets. Further research is needed to explore the impact of different metadata sources and balancing strategies.
Abstract
Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its data and not the model architecture or pre-training objective. However, CLIP only provides very limited information about its data and how it has been collected, leading to works that aim to reproduce CLIP’s data by filtering with its model parameters. In this work, we intend to reveal CLIP’s data curation approach and in our pursuit of making it open to the community introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP’s concepts) and yields a balanced subset over the metadata distribution. Our experimental study rigorously isolates the model and training settings, concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M image-text data pairs outperforms CLIP’s data on multiple standard benchmarks. In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy, surpassing CLIP’s 68.3% on ViT-B models. Scaling to 1B data, while maintaining the same training budget, attains 72.4%. Our observations hold across various model sizes, exemplified by ViT-H achieving 80.5%, without any bells-and-whistles. Curation code and training data distribution on metadata is made available at https://github.com/facebookresearch/MetaCLIP.
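To make the balancing the abstract refers to ("a balanced subset over the metadata distribution") more concrete, here is a rough sketch of the idea: the number of pairs associated with each metadata entry is capped at a threshold `t`, so head entries are sub-sampled while tail entries are kept in full. The 20k default reflects the threshold reported for the 400M-pool setting; the sampling details here are an illustrative assumption rather than the released pipeline.

```python
# Hedged sketch of per-entry balancing over the metadata distribution.
import random
from collections import Counter


def balance_sampling(matched_entries_per_pair: list[list[int]],
                     t: int = 20_000, seed: int = 0) -> list[int]:
    """Return indices of image-text pairs kept after balancing."""
    rng = random.Random(seed)
    # Count how many pairs match each metadata entry.
    counts = Counter(e for entries in matched_entries_per_pair for e in entries)
    kept = []
    for i, entries in enumerate(matched_entries_per_pair):
        if not entries:
            continue  # unmatched pairs were already dropped by sub-string matching
        # Tail entries (count <= t) always survive; head entries survive with
        # probability t / count, which caps their expected count near t.
        if any(rng.random() < min(1.0, t / counts[e]) for e in entries):
            kept.append(i)
    return kept
```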