Authors
Reza Azad, René Arimond, Ehsan Khodapanah Aghdam, Amirhosein Kazerouni, Dorit Merhof
Publication date
2022/12/27
Journal
MICCAI 2023 workshop
Abstract
Transformers have recently gained attention in the computer vision domain due to their ability to model long-range dependencies. However, the self-attention mechanism, which is the core part of the Transformer model, usually suffers from quadratic computational complexity with respect to the number of tokens. Many architectures attempt to reduce model complexity by limiting the self-attention mechanism to local regions or by redesigning the tokenization process. In this paper, we propose DAE-Former, a novel method that seeks to provide an alternative perspective by efficiently designing the self-attention mechanism. More specifically, we reformulate the self-attention mechanism to capture both spatial and channel relations across the whole feature dimension while staying computationally efficient. Furthermore, we redesign the skip connection path by including the cross-attention module to ensure the feature …
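The central idea is an attention formulation whose cost grows linearly, rather than quadratically, with the number of tokens. Below is a minimal sketch of one such linearized (efficient) attention, assuming single-head, unbatched tensors and the trick of applying softmax separately to the queries (over channels) and keys (over tokens); it illustrates the general mechanism only, not the authors' exact DAE-Former modules.

```python
import torch
import torch.nn.functional as F

def efficient_attention(q, k, v):
    """Linear-complexity attention sketch.

    q, k, v: (n_tokens, d) tensors. Instead of the O(n^2) map
    softmax(q @ k.T) @ v, normalize q over channels and k over tokens,
    then contract k with v first, so the cost is O(n * d^2).
    """
    q = F.softmax(q, dim=1)          # each token's query normalized over channels
    k = F.softmax(k, dim=0)          # each channel's keys normalized over tokens
    context = k.transpose(0, 1) @ v  # (d, d) global context matrix
    return q @ context               # (n_tokens, d) output

# hypothetical usage with random features
n, d = 1024, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
out = efficient_attention(q, k, v)   # shape (1024, 64)
```

Because the key-value contraction produces a fixed-size (d, d) context, the memory and compute no longer depend quadratically on the token count, which is what allows attention to be applied across the whole feature dimension in both spatial and channel directions.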
Total citations
Scholar articles
R Azad, R Arimond, EK Aghdam, A Kazerouni… - International Workshop on PRedictive Intelligence In …, 2023