No Token Left Behind: Efficient Vision Transformer via Dynamic Token Idling
Authors: Xuwei Xu, Changlin Li, Yudong Chen, Xiaojun Chang, Jiajun Liu, Sen Wang
What
This paper introduces IdleViT, a dynamic token-idling method that improves the efficiency of Vision Transformers (ViTs): in each layer, only a selected subset of image tokens participates in computation, while the rest are kept idle and passed directly to the layer's output.
Why
ViTs are computationally expensive, which hinders their deployment in resource-constrained environments. Existing token pruning methods drop tokens permanently, so an undesirable pruning decision in an early layer causes an unrecoverable loss of image information. By idling uninformative tokens instead of dropping them, IdleViT reduces computation and speeds up inference without significant accuracy degradation.
How
At each layer, IdleViT selects the most informative image tokens to participate in computation, with a keep ratio controlling how many tokens stay active; the remaining tokens are idled and passed unchanged to the layer's output, so they can be re-selected in later layers. Token selection is further regularized during finetuning by a token cut loss on the attention map, inspired by the normalized graph cut. The authors evaluate IdleViT on ImageNet with DeiT and LV-ViT backbones and compare it against other efficient ViT models.
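To make the token-idling mechanism concrete, here is a minimal PyTorch-style sketch of one idle-aware encoder layer. The class name `IdleBlock`, the `keep_ratio` argument, and the assumption that a per-token importance score is supplied externally are illustrative choices for this note, not the authors' implementation; the point is only that unselected tokens bypass the block unchanged and stay available for re-selection later.

```python
# Illustrative sketch only -- not the authors' code. Assumes a generic
# transformer block and an externally computed per-token importance score.
import torch
import torch.nn as nn


class IdleBlock(nn.Module):
    """One encoder layer with dynamic token idling (hypothetical sketch)."""

    def __init__(self, block: nn.Module, keep_ratio: float = 0.5):
        super().__init__()
        self.block = block          # any standard ViT block: (B, N, C) -> (B, N, C)
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
        # x:      (B, N, C) tokens, with the class token at index 0
        # scores: (B, N-1)  importance of each image token (assumed given)
        B, N, C = x.shape
        k = max(1, int(self.keep_ratio * (N - 1)))

        # Select the top-k image tokens; the rest are idled in this layer.
        topk = scores.topk(k, dim=1).indices + 1           # shift past [CLS]
        cls_idx = torch.zeros(B, 1, dtype=torch.long, device=x.device)
        keep_idx = torch.cat([cls_idx, topk], dim=1)       # (B, k+1)

        selected = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
        updated = self.block(selected)                     # compute on k+1 tokens only

        # Idle tokens are copied through unchanged, so no information is lost
        # and they can be re-selected by later layers.
        out = x.clone()
        out.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, C), updated)
        return out
```

Because idle tokens are carried along rather than discarded, the same scheme can be applied to pyramid ViTs, where every spatial position must retain a token.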
Result
IdleViT reduces the computational complexity of pretrained ViTs by up to 33% and delivers inference speedups of up to 52% over the full models, with minimal accuracy loss (less than 0.3%) on ImageNet. At a keep ratio of 0.5, it outperforms the state-of-the-art EViT on DeiT-S by 0.5% accuracy while running faster, achieving a better accuracy-complexity trade-off than other efficient ViT and convolutional models.
Limitations & Future Work
Limitations are not explicitly discussed in the provided text. Possible future work includes exploring alternative token selection strategies and investigating how well IdleViT generalizes to downstream tasks beyond image classification.
Abstract
Vision Transformers (ViTs) have demonstrated outstanding performance in computer vision tasks, yet their high computational complexity prevents their deployment in computing resource-constrained environments. Various token pruning techniques have been introduced to alleviate the high computational burden of ViTs by dynamically dropping image tokens. However, some undesirable pruning at early stages may result in permanent loss of image information in subsequent layers, consequently hindering model performance. To address this problem, we propose IdleViT, a dynamic token-idle-based method that achieves an excellent trade-off between performance and efficiency. Specifically, in each layer, IdleViT selects a subset of the image tokens to participate in computations while keeping the rest of the tokens idle and directly passing them to this layer’s output. By allowing the idle tokens to be re-selected in the following layers, IdleViT mitigates the negative impact of improper pruning in the early stages. Furthermore, inspired by the normalized graph cut, we devise a token cut loss on the attention map as regularization to improve IdleViT’s token selection ability. Our method is simple yet effective and can be extended to pyramid ViTs since no token is completely dropped. Extensive experimental results on various ViT architectures have shown that IdleViT can diminish the complexity of pretrained ViTs by up to 33% with no more than 0.2% accuracy decrease on ImageNet, after finetuning for only 30 epochs. Notably, when the keep ratio is 0.5, IdleViT outperforms the state-of-the-art EViT on DeiT-S by 0.5% higher accuracy and even faster inference speed. The source code is available in the supplementary material.
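For reference, the normalized graph cut that the token cut loss draws on can be written as below, treating the attention map A as the affinity matrix of a token graph partitioned into a selected set S and an idle set. This is the classical formulation only; the exact loss in the paper may differ.

```latex
% Classical normalized cut over the attention map A (affinity between tokens i, j).
% The paper's token cut loss is inspired by this form; the exact loss may differ.
\[
\mathrm{Ncut}(\mathcal{S}, \bar{\mathcal{S}})
  = \frac{\mathrm{cut}(\mathcal{S}, \bar{\mathcal{S}})}{\mathrm{assoc}(\mathcal{S}, V)}
  + \frac{\mathrm{cut}(\mathcal{S}, \bar{\mathcal{S}})}{\mathrm{assoc}(\bar{\mathcal{S}}, V)},
\qquad
\mathrm{cut}(\mathcal{S}, \bar{\mathcal{S}}) = \sum_{i \in \mathcal{S},\, j \in \bar{\mathcal{S}}} A_{ij},
\qquad
\mathrm{assoc}(\mathcal{S}, V) = \sum_{i \in \mathcal{S},\, j \in V} A_{ij}.
\]
```

Minimizing a cut-style term encourages the attention between selected and idle tokens to be small relative to the total attention each group receives, which sharpens the separation between informative and uninformative tokens.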