Iterated Learning Improves Compositionality in Large Vision-Language Models
Authors: Chenhao Zheng, Jieyu Zhang, Aniruddha Kembhavi, Ranjay Krishna
What
This paper proposes a novel iterated learning algorithm for vision-language models that improves their compositionality, drawing inspiration from cultural transmission theory in cognitive science, which holds that languages evolve to become more compositional as they are passed across generations.
Why
Despite advances in large vision-language models, existing models struggle with compositional understanding, which limits their ability to generalize and reason about novel situations. This paper addresses this issue with a novel training paradigm inspired by human language development, potentially paving the way for more robust and interpretable vision-language models.
How
The authors reframe vision-language contrastive learning as a Lewis Signaling Game between a vision agent and a language agent. They introduce a shared codebook as the basis for both agents' representations, and periodically reset the language agent's weights, mimicking cultural transmission across generations. This forces the vision agent to learn representations that are easier for new language agents to acquire, thereby improving compositionality.
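To make the training paradigm concrete, here is a minimal PyTorch-style sketch of one way such an iterated learning loop could look. It assumes CLIP-style `vision_agent` and `language_agent` encoders, a learnable shared codebook that both agents' features are projected through, and a `make_language_agent` factory for re-initializing the language agent; these names and the soft codebook projection are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative sketch (not the authors' released code) of CLIP-style contrastive
# training with a shared codebook and periodic re-initialization of the language
# agent, mimicking cultural transmission across "generations".
import torch
import torch.nn.functional as F


def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used in CLIP-style contrastive training."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


def project_to_codebook(features, codebook):
    """Re-express agent features as soft mixtures over shared code vectors, so both
    agents communicate through the same vocabulary (a simplifying assumption here)."""
    attn = torch.softmax(features @ codebook.t(), dim=-1)  # (batch, num_codes)
    return attn @ codebook                                 # (batch, dim)


def iterated_learning(vision_agent, make_language_agent, codebook, loader,
                      num_generations, lr=1e-4):
    language_agent = make_language_agent()
    for generation in range(num_generations):
        if generation > 0:
            # "Cultural transmission": a freshly initialized language agent must
            # relearn the mapping from the vision agent's codebook-mediated
            # representations, pressuring them to stay easy to learn.
            language_agent = make_language_agent()
        params = (list(vision_agent.parameters())
                  + list(language_agent.parameters()) + [codebook])
        optimizer = torch.optim.AdamW(params, lr=lr)

        for images, texts in loader:  # one pass over the data per generation
            img_emb = project_to_codebook(vision_agent(images), codebook)
            txt_emb = project_to_codebook(language_agent(texts), codebook)
            loss = clip_loss(img_emb, txt_emb)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return vision_agent, language_agent, codebook
```

Note the asymmetry this sketch reflects: only the language agent is re-initialized, while the vision agent and the shared codebook persist across generations.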
Result
The proposed iterated learning algorithm demonstrably improves compositionality on several benchmarks, including SugarCrepe and CREPE, outperforming baseline models like standard CLIP and NegCLIP. Importantly, this improvement doesn’t come at the cost of recognition capability, as shown by comparable performance on zero-shot image classification tasks. Further analysis suggests that iterated learning leads to smoother, easier-to-learn visual representations and a more interpretable codebook.
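For context on how such benchmarks score compositionality, SugarCrepe-style evaluation asks whether the model ranks an image's true caption above a minimally edited hard-negative caption. Below is a hedged sketch of that protocol; the `encode_image` / `encode_text` method names are assumptions, not the benchmark's actual API.

```python
# Illustrative sketch of hard-negative caption ranking, the style of evaluation
# used by compositionality benchmarks such as SugarCrepe.
import torch
import torch.nn.functional as F


@torch.no_grad()
def hard_negative_accuracy(model, examples):
    """examples: iterable of (image, positive_caption, negative_caption) triples."""
    correct, total = 0, 0
    for image, pos_caption, neg_caption in examples:
        img = F.normalize(model.encode_image(image), dim=-1)
        pos = F.normalize(model.encode_text(pos_caption), dim=-1)
        neg = F.normalize(model.encode_text(neg_caption), dim=-1)
        # The model gets credit if it prefers the true caption over the hard negative.
        correct += int((img * pos).sum() > (img * neg).sum())
        total += 1
    return correct / max(total, 1)
```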
Limitations & Future Work
The paper acknowledges potential training instability caused by the randomness introduced when agent weights are reset. Future work could focus on stabilizing the learning process and on extending the approach to domains beyond vision and language.
Abstract
A fundamental characteristic common to both human vision and natural language is their compositional nature. Yet, despite the performance gains contributed by large vision and language pretraining, recent investigations find that most, if not all, of our state-of-the-art vision-language models struggle at compositionality. They are unable to distinguish between images of “a girl in white facing a man in black” and “a girl in black facing a man in white”. Moreover, prior work suggests that compositionality doesn’t arise with scale: larger model sizes or training data don’t help. This paper develops a new iterated training algorithm that incentivizes compositionality. We draw on decades of cognitive science research that identifies cultural transmission, the need to teach a new generation, as a necessary inductive prior that incentivizes humans to develop compositional languages. Specifically, we reframe vision-language contrastive learning as the Lewis Signaling Game between a vision agent and a language agent, and operationalize cultural transmission by iteratively resetting one of the agents’ weights during training. After every iteration, this training paradigm induces representations that become “easier to learn”, a property of compositional languages: e.g., our model trained on CC3M and CC12M improves standard CLIP by 4.7% and 4.0%, respectively, on the SugarCrepe benchmark.