When Do We Not Need Larger Vision Models?
Authors: Baifeng Shi, Ziyang Wu, Maolin Mao, Xin Wang, Trevor Darrell
What
This paper explores “Scaling on Scales” (S^2) as a competitive alternative to increasing model size for obtaining stronger visual representations, demonstrating that smaller models, when run over multiple image scales, can match or outperform larger models on tasks such as classification, segmentation, and depth estimation.
Why
This paper challenges the prevailing assumption that larger models are always necessary for better visual understanding, proposing a more efficient scaling method that achieves comparable or superior performance with fewer parameters and similar computational cost, which has significant implications for research directions and resource allocation.
How
The authors introduce “S^2-Wrapper,” a parameter-free mechanism that extends any pre-trained, frozen model to multi-scale feature extraction: an image interpolated to a larger scale is split into sub-images of the original input size, each sub-image is processed independently, and the resulting features are merged back into a single multi-scale representation. They then conduct extensive experiments comparing S^2 against model-size scaling across tasks and datasets including ImageNet, ADE20k, NYUv2, and robotic manipulation.
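To make the mechanism concrete, below is a minimal PyTorch sketch of the multi-scale wrapping idea. It is not the authors' released code: the function name, the bilinear resizing, and the average-pooling merge are assumptions made for illustration. The sketch takes a frozen backbone that returns spatial feature maps, interpolates the image to each scale, splits it into sub-images of the backbone's native input size, runs the backbone on each sub-image independently, and pools the stitched features back to the base resolution before concatenating channels.

```python
# Minimal sketch of the S^2-Wrapper idea (illustrative only, not the released code).
# Assumes `backbone` is a frozen model mapping a (B, 3, base_size, base_size) batch
# to a (B, C, h, w) spatial feature map, and that `scales` starts at 1.
import torch
import torch.nn.functional as F


def s2_forward(backbone, image, scales=(1, 2), base_size=224):
    features = []
    base_hw = None  # feature resolution at scale 1, used as the merge target
    for s in scales:
        # Interpolate the image to (s * base_size)^2 and split it into s x s sub-images.
        scaled = F.interpolate(image, size=(base_size * s, base_size * s),
                               mode="bilinear", align_corners=False)
        rows = []
        for i in range(s):
            cols = []
            for j in range(s):
                sub = scaled[:, :, i * base_size:(i + 1) * base_size,
                             j * base_size:(j + 1) * base_size]
                cols.append(backbone(sub))           # each sub-image processed independently
            rows.append(torch.cat(cols, dim=-1))     # stitch feature maps along width
        stitched = torch.cat(rows, dim=-2)           # (B, C, s*h, s*w)
        if base_hw is None:
            base_hw = stitched.shape[-2:]
        # Pool each scale's features back to the base resolution before merging.
        features.append(F.adaptive_avg_pool2d(stitched, base_hw))
    # Merge by concatenating along channels: output is (B, C * len(scales), h, w).
    return torch.cat(features, dim=1)
```

Used this way, the frozen backbone's parameter count is unchanged; only the number of forward passes grows with the set of scales.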
Result
The key finding is that smaller models with S^2 scaling often match or surpass larger models across these tasks, excelling particularly in dense prediction tasks such as segmentation and depth estimation, and achieving state-of-the-art visual detail understanding for multimodal LLMs on the V* benchmark by scaling image resolution to 1008^2.
Limitations & Future Work
Limitations include the weaker generalization of smaller models pre-trained on a single scale compared to larger models on hard examples, and future work points towards exploring scale-selective processing for efficiency and enabling parallel processing of a single image for latency-critical scenarios.
Abstract
Scaling up the size of vision models has been the de facto standard to obtain more powerful visual representations. In this work, we discuss the point beyond which larger vision models are not necessary. First, we demonstrate the power of Scaling on Scales (S^2), whereby a pre-trained and frozen smaller vision model (e.g., ViT-B or ViT-L), run over multiple image scales, can outperform larger models (e.g., ViT-H or ViT-G) on classification, segmentation, depth estimation, Multimodal LLM (MLLM) benchmarks, and robotic manipulation. Notably, S^2 achieves state-of-the-art performance in detailed understanding of MLLM on the V* benchmark, surpassing models such as GPT-4V. We examine the conditions under which S^2 is a preferred scaling approach compared to scaling on model size. While larger models have the advantage of better generalization on hard examples, we show that features of larger vision models can be well approximated by those of multi-scale smaller models. This suggests most, if not all, of the representations learned by current large pre-trained models can also be obtained from multi-scale smaller models. Our results show that a multi-scale smaller model has comparable learning capacity to a larger model, and pre-training smaller models with S^2 can match or even exceed the advantage of larger models. We release a Python package that can apply S^2 on any vision model with one line of code: https://github.com/bfshi/scaling_on_scales.
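For reference, the released package is advertised as applying S^2 with a single call; the import and function names in the snippet below are recalled from the repository's README and should be treated as assumptions to verify against the linked repo rather than as the authoritative API.

```python
# Assumed one-line usage of the released s2wrapper package (verify against the repo).
from s2wrapper import forward as multiscale_forward

# `model` is any vision backbone; `images` is a standard image batch.
multiscale_features = multiscale_forward(model, images, scales=[1, 2])
```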