State space model for new-generation network alternative to transformers: A survey

X Wang, S Wang, Y Ding, Y Li, W Wu, Y Rong… - arXiv preprint arXiv …, 2024 - arxiv.org
In the post-deep-learning era, the Transformer architecture has demonstrated powerful
performance across large pre-trained models and various downstream tasks. However, the …
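
As context for the survey above, the state space models it covers share a common linear recurrence: a hidden state h_t updated as h_t = A_bar h_{t-1} + B_bar x_t with readout y_t = C h_t, scanned over the sequence in time linear in its length (in contrast to the quadratic cost of self-attention). The sketch below is a minimal NumPy illustration of that recurrence; the function name ssm_scan, the toy dimensions, and the random parameters are assumptions for illustration only, not code from any of the papers listed here.

    # Minimal sketch of a discretized linear state space model (SSM) scan.
    # Illustrative only: shapes and parameter values are hypothetical.
    import numpy as np

    def ssm_scan(x, A_bar, B_bar, C):
        """Compute y_t = C h_t with h_t = A_bar @ h_{t-1} + B_bar @ x_t.

        x:      (T, d_in)           input sequence
        A_bar:  (d_state, d_state)  discretized state transition
        B_bar:  (d_state, d_in)     discretized input projection
        C:      (d_out, d_state)    output projection
        """
        T = x.shape[0]
        h = np.zeros(A_bar.shape[0])
        ys = []
        for t in range(T):                  # linear in sequence length T
            h = A_bar @ h + B_bar @ x[t]    # state update
            ys.append(C @ h)                # readout
        return np.stack(ys)                 # (T, d_out)

    # Toy usage with random parameters.
    rng = np.random.default_rng(0)
    T, d_in, d_state, d_out = 16, 4, 8, 4
    y = ssm_scan(rng.normal(size=(T, d_in)),
                 0.9 * np.eye(d_state),                   # stable toy transition
                 0.1 * rng.normal(size=(d_state, d_in)),
                 0.1 * rng.normal(size=(d_out, d_state)))
    print(y.shape)  # (16, 4)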

SPMamba: State-space model is all you need in speech separation

K Li, G Chen - arXiv preprint arXiv:2404.02063, 2024 - arxiv.org
In speech separation, both CNN- and Transformer-based models have demonstrated robust
separation capabilities, garnering significant attention within the research community …

3DMambaIPF: A state space model for iterative point cloud filtering via differentiable rendering

Q Zhou, W Yang, B Fei, J Xu, R Zhang, K Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Noise is an inevitable aspect of point cloud acquisition, necessitating filtering as a
fundamental task within the realm of 3D vision. Existing learning-based filtering methods …

BioMamba: A pre-trained biomedical language representation model leveraging mamba

L Yue, S Xing, Y Lu, T Fu - arXiv preprint arXiv:2408.02600, 2024 - arxiv.org
The advancement of natural language processing (NLP) in biology hinges on models' ability
to interpret intricate biomedical literature. Traditional models often struggle with the complex …

Is Mamba Effective for Time Series Forecasting?

Z Wang, F Kong, S Feng, M Wang, H Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
In the realm of time series forecasting (TSF), the Transformer has consistently demonstrated
robust performance due to its ability to focus on the global context and effectively capture …

A Comprehensive Survey of Mamba Architectures for Medical Image Analysis: Classification, Segmentation, Restoration and Beyond

S Bansal, S Madisetty, MZU Rehman… - arXiv preprint arXiv …, 2024 - arxiv.org
Mamba, a special case of the State Space Model, is gaining popularity as an alternative to
template-based deep learning approaches in medical image analysis. While transformers …

Venturing into Uncharted Waters: The Navigation Compass from Transformer to Mamba

Y Zou, Y Chen, Z Li, L Zhang, H Zhao - arXiv preprint arXiv:2406.16722, 2024 - arxiv.org
The Transformer, a deep neural network architecture, has long dominated the field of natural
language processing and beyond. Nevertheless, the recent introduction of Mamba …

SELD-Mamba: Selective State-Space Model for Sound Event Localization and Detection with Source Distance Estimation

D Mu, Z Zhang, H Yue, Z Wang, J Tang… - arXiv preprint arXiv …, 2024 - arxiv.org
In the Sound Event Localization and Detection (SELD) task, Transformer-based models
have demonstrated impressive capabilities. However, the quadratic complexity of the …

Improving VTE Identification through Language Models from Radiology Reports: A Comparative Study of Mamba, Phi-3 Mini, and BERT

J Deng, Y Wu, Y Yesha, P Nguyen - arXiv preprint arXiv:2408.09043, 2024 - arxiv.org
Venous thromboembolism (VTE) is a critical cardiovascular condition, encompassing deep
vein thrombosis (DVT) and pulmonary embolism (PE). Accurate and timely identification of …