Prompt Switch: Efficient CLIP Adaptation for Text-Video Retrieval

Authors: Chaorui Deng, Qi Chen, Pengda Qin, Da Chen, Qi Wu

What

This paper proposes Prompt Switch, an efficient method for adapting CLIP to text-video retrieval. It introduces a Prompt Cube mechanism that enhances the learning of both global and detailed video semantics, achieving state-of-the-art performance while keeping inference efficient.

Why

This paper addresses the efficiency bottleneck of existing CLIP-based text-video retrieval methods, which rely on computationally expensive cross-modal fusion and must therefore recompute video representations online for every text query. It instead enhances video representation learning within the CLIP framework, decoupling the video and text modalities at inference time so that video representations can be computed offline and reused, enabling retrieval that is both efficient and effective.
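
The practical payoff of this decoupling is that retrieval reduces to a similarity search over pre-computed embeddings. Below is a minimal sketch (not the authors' code) of that setup; the encoder APIs, embedding shapes, and function names are assumptions for illustration only.

```python
# Sketch: text-agnostic retrieval with offline video embeddings.
# Video features are computed once and reused for every text query,
# so answering a query is a single matrix multiplication.
import torch
import torch.nn.functional as F

def build_video_index(video_encoder, videos):
    """Encode all videos offline (hypothetical encoder returning a D-dim vector per video)."""
    with torch.no_grad():
        feats = torch.stack([video_encoder(v) for v in videos])  # (N_videos, D)
    return F.normalize(feats, dim=-1)

def retrieve(text_encoder, query, video_index, k=5):
    """Rank the pre-computed video embeddings against one text query."""
    with torch.no_grad():
        q = F.normalize(text_encoder(query), dim=-1)             # (D,)
    scores = video_index @ q                                     # (N_videos,)
    return scores.topk(k).indices
```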

How

The authors introduce a Prompt Cube, a 3D tensor integrated into the CLIP image encoder. This cube undergoes a Prompt Switch operation, transposing its spatial and temporal dimensions before each self-attention layer to capture global video semantics. Additionally, an auxiliary video captioning objective is employed during training to enhance the learning of detailed video semantics. Finally, a simple mean pooling strategy is used on the enhanced frame representations to obtain the video representation.
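
The following PyTorch sketch illustrates the Prompt Switch idea described above; it is a schematic reconstruction, not the released implementation. It assumes the prompt cube has shape (T, T, D), so that each of the T frames carries T prompt tokens, and that transposing the cube's first two axes before each frame-wise self-attention layer lets every frame attend to prompts that previously resided in the other frames.

```python
import torch
import torch.nn as nn

class PromptSwitchBlock(nn.Module):
    """One encoder block with the assumed Prompt Switch before self-attention."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_tokens: torch.Tensor, prompt_cube: torch.Tensor):
        # frame_tokens: (T, N, D) patch tokens per frame; prompt_cube: (T, T, D)
        prompt_cube = prompt_cube.transpose(0, 1)           # the "switch"
        x = torch.cat([prompt_cube, frame_tokens], dim=1)   # (T, T + N, D), per-frame sequence
        h = self.norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]   # frame-wise self-attention
        T = prompt_cube.size(1)
        return x[:, T:], x[:, :T]                           # updated frame tokens, prompt cube
```

After the final block, the video representation would be obtained by mean-pooling the enhanced per-frame features, e.g. `video_emb = frame_tokens.mean(dim=(0, 1))`, matching the simple temporal fusion strategy described above.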

Result

The proposed Prompt Switch method achieves state-of-the-art performance on three benchmark datasets (MSR-VTT, MSVD, LSMDC) for text-video retrieval, outperforming previous methods, especially under the text-agnostic temporal fusion setting. It demonstrates a significant improvement in efficiency compared to methods relying on cross-modal temporal fusion, making it more suitable for large-scale retrieval systems.

Limitations & Future Work

The authors acknowledge that their captioning module is relatively simple and might benefit from more advanced architectures. For future work, they suggest exploring other pre-training tasks or incorporating external knowledge to further enhance the model’s performance.

Abstract

In text-video retrieval, recent works have benefited from the powerful learning capabilities of pre-trained text-image foundation models (e.g., CLIP) by adapting them to the video domain. A critical problem for them is how to effectively capture the rich semantics inside the video using the image encoder of CLIP. To tackle this, state-of-the-art methods adopt complex cross-modal modeling techniques to fuse the text information into video frame representations, which, however, incurs severe efficiency issues in large-scale retrieval systems as the video representations must be recomputed online for every text query. In this paper, we discard this problematic cross-modal fusion process and aim to learn semantically-enhanced representations purely from the video, so that the video representations can be computed offline and reused for different texts. Concretely, we first introduce a spatial-temporal “Prompt Cube” into the CLIP image encoder and iteratively switch it within the encoder layers to efficiently incorporate the global video semantics into frame representations. We then propose to apply an auxiliary video captioning objective to train the frame representations, which facilitates the learning of detailed video semantics by providing fine-grained guidance in the semantic space. With a naive temporal fusion strategy (i.e., mean-pooling) on the enhanced frame representations, we obtain state-of-the-art performances on three benchmark datasets, i.e., MSR-VTT, MSVD, and LSMDC.
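
As a final illustration, the sketch below shows how the training objective implied by the abstract might be assembled: a standard symmetric text-video contrastive loss plus an auxiliary captioning loss on the enhanced frame representations. The loss weight `lambda_cap`, temperature, and tensor shapes are assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb, text_emb, temperature=0.05):
    """Symmetric InfoNCE over a batch of paired video/text embeddings (B, D)."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

def total_loss(video_emb, text_emb, caption_logits, caption_tokens, lambda_cap=0.5):
    """Retrieval loss plus auxiliary captioning loss (token-level cross-entropy).

    caption_logits: (B, L, V) decoder outputs; caption_tokens: (B, L) target ids.
    """
    cap = F.cross_entropy(caption_logits.flatten(0, 1), caption_tokens.flatten())
    return contrastive_loss(video_emb, text_emb) + lambda_cap * cap
```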