Vision Mamba: A Comprehensive Survey and Taxonomy

Authors: Xiao Liu, Chenxu Zhang, Lei Zhang

What

This paper presents a comprehensive survey of Mamba, a novel deep learning architecture based on state space models (SSMs), and its applications in various computer vision tasks.

Why

This survey is important because it provides a timely and comprehensive overview of Mamba, which is rapidly gaining traction in the computer vision community as a more efficient alternative to Transformers and CNNs, particularly for processing long sequences and high-resolution images.

How

The authors review the existing literature on Mamba and categorize its variants according to the vision tasks they address, spanning general vision, multi-modal learning, and vertical domains such as remote sensing and medical image analysis.

Result

The paper highlights successful applications of Mamba across a wide spectrum of vision tasks, showing favorable efficiency, accuracy, and memory usage compared with traditional Transformer- and CNN-based architectures. Key results include state-of-the-art performance achieved by Mamba variants in image classification, object detection, semantic segmentation, image restoration, 3D vision, and multi-modal tasks.

Limitations and Future Work

The authors identify several limitations and future research directions for Mamba: new scanning mechanisms that better handle the non-causal nature of visual data; synergistic hybrid architectures that combine Mamba with other approaches such as Transformers; large-scale Mamba models; and integration with other methodologies such as diffusion models and domain generalization.
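To make the "scanning mechanism" point concrete, below is a minimal NumPy sketch of the kind of multi-directional flattening used by existing vision Mamba variants (e.g., bidirectional or cross-scan orders). The function name, shapes, and choice of four scan orders are illustrative assumptions for this summary, not an API from the surveyed works.

    import numpy as np

    def multi_directional_scans(feat):
        """Flatten a 2D feature map (H, W, C) into several 1D scan orders.

        Vision Mamba variants run an SSM over each order and merge the results,
        so every position receives context from multiple directions. This eases
        the mismatch between a causal 1D scan and non-causal 2D images.
        (Illustrative sketch; names and shapes are assumptions, not a library API.)
        """
        H, W, C = feat.shape
        row_major = feat.reshape(H * W, C)                     # left-to-right, top-to-bottom
        row_major_rev = row_major[::-1]                        # reversed row-major
        col_major = feat.transpose(1, 0, 2).reshape(H * W, C)  # top-to-bottom, left-to-right
        col_major_rev = col_major[::-1]                        # reversed column-major
        return [row_major, row_major_rev, col_major, col_major_rev]

The survey's open question is whether hand-designed orders like these are the best way to expose 2D (and 3D) structure to a 1D recurrence, or whether new scanning schemes are needed.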

Abstract

The State Space Model (SSM) is a mathematical framework for describing and analyzing the behavior of dynamic systems, with applications in fields such as control theory, signal processing, economics, and machine learning. In deep learning, SSMs are used to process sequence data for tasks such as time series analysis, natural language processing (NLP), and video understanding. By mapping sequence data into a state space, long-term dependencies can be captured more effectively. In particular, modern SSMs have shown strong representational capability in NLP, especially in long-sequence modeling, while maintaining linear time complexity. Building on the latest state space models, Mamba introduces time-varying (input-dependent) parameters into the SSM and formulates a hardware-aware algorithm for efficient training and inference. Given its impressive efficiency and strong long-range dependency modeling, Mamba is expected to become a new AI architecture that may outperform the Transformer. Recently, a number of works have extended Mamba from the natural language domain to the visual domain and studied its potential in fields such as general vision, multi-modal learning, medical image analysis, and remote sensing image analysis. To fully understand Mamba in the visual domain, we conduct a comprehensive survey and present a taxonomy study. This survey focuses on Mamba's application to a variety of visual tasks and data types, and discusses its predecessors, recent advances, and far-reaching impact across a wide range of domains. Since Mamba is on an upward trend, please notify us of new findings; new progress on Mamba will be included in this survey in a timely manner and updated on the Mamba project at https://github.com/lx6c78/Vision-Mamba-A-Comprehensive-Survey-and-Taxonomy.
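As a reference for the SSM formulation described in the abstract, here is a minimal NumPy sketch of a discretized, diagonal SSM recurrence with input-dependent (selective) parameters. The array shapes, the zero-order-hold-style discretization, and the function name are illustrative assumptions, not the paper's exact formulation; Mamba computes this recurrence with a fused, hardware-aware parallel scan rather than a Python loop.

    import numpy as np

    def selective_ssm_scan(u, A, B_t, C_t, delta_t):
        """Sequential reference for a selective SSM (illustrative only).

        u       : (L, D)     input sequence of length L with D channels
        A       : (D, N)     per-channel diagonal state matrix
        B_t     : (L, D, N)  input-dependent input projection
        C_t     : (L, D, N)  input-dependent output projection
        delta_t : (L, D)     input-dependent step sizes
        """
        L, D = u.shape
        N = A.shape[1]
        h = np.zeros((D, N))     # hidden state carried across time steps
        y = np.zeros((L, D))
        for t in range(L):
            # Discretize with a per-step, per-channel step size (ZOH-style)
            A_bar = np.exp(delta_t[t][:, None] * A)        # (D, N)
            B_bar = delta_t[t][:, None] * B_t[t]           # (D, N)
            # Linear recurrence: state update, then readout
            h = A_bar * h + B_bar * u[t][:, None]          # (D, N)
            y[t] = np.sum(C_t[t] * h, axis=-1)             # (D,)
        return y

Because B_t, C_t, and delta_t depend on the input at each step, the model can selectively retain or discard information, which is the "time-varying parameters" idea the abstract refers to, while the recurrence itself stays linear in sequence length.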