RFAConv: Innovating spatial attention and standard convolutional operation

X Zhang, C Liu, D Yang, T Song, Y Ye, K Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Spatial attention has been widely used to improve the performance of convolutional neural
networks. However, it has certain limitations. In this paper, we propose a new perspective on …

Squeeze-and-excitation networks

J Hu, L Shen, G Sun - … of the IEEE conference on computer …, 2018 - openaccess.thecvf.com
Convolutional neural networks are built upon the convolution operation, which extracts
informative features by fusing spatial and channel-wise information together within local …

Spatial channel attention for deep convolutional neural networks

T Liu, R Luo, L Xu, D Feng, L Cao, S Liu, J Guo - Mathematics, 2022 - mdpi.com
Recently, the attention mechanism combining spatial and channel information has been
widely used in various deep convolutional neural networks (CNNs), proving its great …

Gather-excite: Exploiting feature context in convolutional neural networks

J Hu, L Shen, S Albanie, G Sun… - Advances in neural …, 2018 - proceedings.neurips.cc
While the use of bottom-up local operators in convolutional neural networks (CNNs)
matches well some of the statistics of natural images, it may also prevent such models from …

Shift: A zero flop, zero parameter alternative to spatial convolutions

B Wu, A Wan, X Yue, P Jin, S Zhao… - Proceedings of the …, 2018 - openaccess.thecvf.com
Neural networks rely on convolutions to aggregate spatial information. However, spatial
convolutions are expensive in terms of model size and computation, both of which grow …

SimAM: A simple, parameter-free attention module for convolutional neural networks

L Yang, RY Zhang, L Li, X Xie - International conference on …, 2021 - proceedings.mlr.press
In this paper, we propose a conceptually simple but very effective attention module for
Convolutional Neural Networks (ConvNets). In contrast to existing channel-wise and spatial …

Global attention mechanism: Retain information to enhance channel-spatial interactions

Y Liu, Z Shao, N Hoffmann - arXiv preprint arXiv:2112.05561, 2021 - arxiv.org
A variety of attention mechanisms have been studied to improve the performance of various
computer vision tasks. However, the prior methods overlooked the significance of retaining …

A Spatial-Channel Feature-Enriched Module Based on Multi-Context Statistics Attention

H Tao, Q Duan - IEEE Internet of Things Journal, 2023 - ieeexplore.ieee.org
Convolutional neural networks (CNNs) have demonstrated remarkable performance in
various computer vision tasks, such as image classification, semantic segmentation, and …

On the connection between local attention and dynamic depth-wise convolution

Q Han, Z Fan, Q Dai, L Sun, MM Cheng, J Liu… - arXiv preprint arXiv …, 2021 - arxiv.org
Vision Transformer (ViT) attains state-of-the-art performance in visual recognition, and the
variant, Local Vision Transformer, makes further improvements. The major component in …

Spatial pyramid attention for deep convolutional neural networks

X Ma, J Guo, A Sansom, M McGuire… - IEEE Transactions …, 2021 - ieeexplore.ieee.org
Attention mechanisms have shown great success in computer vision. However, the
commonly used global average pooling in some implementations aggregates a three …