LP++: A Surprisingly Strong Linear Probe for Few-Shot CLIP

Authors: Yunshi Huang, Fereshteh Shakeri, Jose Dolz, Malik Boudiaf, Houda Bahig, Ismail Ben Ayed

What

The paper introduces LP++, a few-shot CLIP adaptation method that generalizes the standard linear probe (LP) baseline: the linear classifier weights become learnable functions of the text embeddings, blended with the visual prototypes through class-wise multipliers, which yields surprisingly strong gains over vanilla LP.
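Concretely, the formulation can be summarized as follows. The notation is reconstructed from the abstract's description, so treat it as a sketch rather than the paper's exact equations:

```latex
% Blended classifier weights: visual prototype v_k plus a learnable
% class-wise multiplier alpha_k applied to the text embedding t_k.
\mathbf{w}_k = \mathbf{v}_k + \alpha_k \mathbf{t}_k,
\qquad
\min_{\{\mathbf{v}_k\},\,\{\alpha_k\}}
\; -\frac{1}{N} \sum_{i=1}^{N} \log
\frac{\exp(\mathbf{f}_i^{\top} \mathbf{w}_{y_i})}
     {\sum_{j=1}^{K} \exp(\mathbf{f}_i^{\top} \mathbf{w}_j)}
```

Here f_i are the frozen image embeddings of the N support samples, y_i their labels, and t_k the text embedding of class k's prompt. Setting every alpha_k to zero recovers the standard linear probe.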

Why

The paper challenges the established notion that LP is a weak baseline for few-shot CLIP adaptation. LP++ demonstrates that a simple, efficient, black-box approach (one that never touches the pre-trained model's internal representations) can achieve state-of-the-art results, outperforming more complex methods such as prompt learning and adapters at a fraction of their computational cost.

How

The authors optimize a cross-entropy objective over two blocks of variables, the class visual prototypes and the learnable blending parameters, using a block coordinate Majorize-Minimize (MM) descent algorithm. The learning rates are data-driven, derived from approximate Lipschitz constants of the loss gradient, which eliminates the need for extensive hyper-parameter search. Furthermore, they leverage insights from convex optimization to derive approximations of the loss function's minima, yielding a data-informed initialization of the variables; a sketch of the resulting procedure follows.
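A minimal PyTorch sketch of the optimization loop, assuming the blended-classifier form above. The Lipschitz bounds `L_v` and `L_alpha` below are crude illustrative stand-ins for the constants the paper derives, and the initialization is simplified:

```python
# Minimal sketch of LP++'s block coordinate Majorize-Minimize (MM) loop.
# The Lipschitz bounds and the initialization below are simplified,
# illustrative assumptions, not the paper's exact derivations.
import torch
import torch.nn.functional as F

def lp_plus_plus_sketch(feats, labels, text_emb, n_iters=300):
    """feats:    (N, d) L2-normalized image embeddings of the support set.
    labels:   (N,) long tensor of class indices in [0, K).
    text_emb: (K, d) L2-normalized text embeddings, one prompt per class.
    Assumes every class appears at least once in the support set.
    """
    K = text_emb.shape[0]

    # Data-informed initialization (simplified): visual prototypes are the
    # class means; blending multipliers start from a uniform value.
    v = torch.stack([feats[labels == k].mean(0) for k in range(K)])  # (K, d)
    alpha = torch.ones(K, 1)                                         # (K, 1)

    # Crude data-driven Lipschitz upper bounds for each gradient block;
    # the paper derives tighter, per-block constants from the loss itself.
    L_v = feats.norm(dim=1).pow(2).max().item()
    L_alpha = (feats @ text_emb.T).pow(2).max().item()

    for _ in range(n_iters):
        # v-block: one MM step with the implicit step size 1 / L_v.
        v = v.detach().requires_grad_(True)
        w = v + alpha * text_emb            # blended classifier weights
        F.cross_entropy(feats @ w.T, labels).backward()
        v = (v - v.grad / L_v).detach()

        # alpha-block: one MM step with the implicit step size 1 / L_alpha.
        alpha = alpha.detach().requires_grad_(True)
        w = v + alpha * text_emb
        F.cross_entropy(feats @ w.T, labels).backward()
        alpha = (alpha - alpha.grad / L_alpha).detach()

    return v, alpha
```

Note that both step sizes are implicit (1/L per block): nothing in the loop is tuned on a validation set, which is the practical point of the data-driven learning rates.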

Result

LP++ consistently outperforms the standard LP baseline and achieves competitive performance compared to state-of-the-art few-shot CLIP adaptation methods, particularly in low-shot scenarios. It runs orders of magnitude faster than prompt learning methods and avoids the need for intensive hyper-parameter tuning characteristic of adapter-based approaches. Furthermore, LP++ enables black-box adaptation, making it suitable for real-world, privacy-preserving situations where access to model internals is restricted.

Limitations & Future Work

The paper does not explicitly mention limitations or future work. However, potential future work could explore: (1) Applying LP++ to other vision-language tasks beyond image classification. (2) Investigating the impact of different text prompt designs and how to learn them in a data-driven manner. (3) Exploring different block-cycling strategies within the BMM procedure to further improve efficiency. (4) Investigating theoretical guarantees of convergence for LP++ under specific conditions.

Abstract

In a recent, strongly emergent literature on few-shot CLIP adaptation, Linear Probe (LP) has often been reported as a weak baseline. This has motivated intensive research building convoluted prompt learning or feature adaptation strategies. In this work, we propose and examine from convex-optimization perspectives a generalization of the standard LP baseline, in which the linear classifier weights are learnable functions of the text embedding, with class-wise multipliers blending image and text knowledge. As our objective function depends on two types of variables, i.e., the class visual prototypes and the learnable blending parameters, we propose a computationally efficient block coordinate Majorize-Minimize (MM) descent algorithm. In our full-batch MM optimizer, which we coin LP++, step sizes are implicit, unlike standard gradient descent practices where learning rates are intensively searched over validation sets. By examining the mathematical properties of our loss (e.g., Lipschitz gradient continuity), we build majorizing functions yielding data-driven learning rates and derive approximations of the loss's minima, which provide data-informed initialization of the variables. Our image-language objective function, along with these non-trivial optimization insights and ingredients, yields, surprisingly, highly competitive few-shot CLIP performances. Furthermore, LP++ operates in black-box, relaxes intensive validation searches for the optimization hyper-parameters, and runs orders of magnitude faster than state-of-the-art few-shot CLIP adaptation methods. Our code is available at: https://github.com/FereshteShakeri/FewShot-CLIP-Strong-Baseline.git
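As background on the MM step the abstract alludes to (a standard identity, not a result specific to this paper): when a block of the loss has an L-Lipschitz gradient, the quadratic surrogate below majorizes the loss and is tight at the current iterate, and minimizing it yields a gradient step with the implicit step size 1/L. The paper's contribution lies in deriving data-driven approximations of L for each variable block.

```latex
% Quadratic majorizer of an L-smooth loss, tight at the current iterate u^t;
% its minimizer is a gradient step with implicit step size 1/L.
\mathcal{L}(\mathbf{u}) \;\le\;
\mathcal{L}(\mathbf{u}^{t})
+ \nabla\mathcal{L}(\mathbf{u}^{t})^{\top}(\mathbf{u}-\mathbf{u}^{t})
+ \frac{L}{2}\,\lVert \mathbf{u}-\mathbf{u}^{t} \rVert^{2},
\qquad
\mathbf{u}^{t+1} = \mathbf{u}^{t} - \tfrac{1}{L}\,\nabla\mathcal{L}(\mathbf{u}^{t})
```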