Dense CNN with self-attention for time-domain speech enhancement

A Pandey, DL Wang - IEEE/ACM Transactions on Audio, Speech …, 2021 - ieeexplore.ieee.org
… In this work, we propose a dense convolutional network (DCN) with self-attention for speech
… sequence Y, a self-attention layer can be implemented by using a linear layer to compute Q …
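
A generic sketch of the construction the snippet points at: one linear layer each to compute Q, K, and V from the sequence Y. This illustrates standard scaled dot-product self-attention, not the exact layer from the DCN paper; dimensions are placeholders.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Scaled dot-product self-attention over a sequence Y.
    A generic sketch, not the DCN paper's exact layer."""
    def __init__(self, dim):
        super().__init__()
        # Q, K, V are each computed from Y by a linear layer.
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, y):                      # y: (batch, seq_len, dim)
        q, k, v = self.q(y), self.k(y), self.v(y)
        scores = q @ k.transpose(1, 2) * self.scale   # (batch, n, n)
        attn = torch.softmax(scores, dim=-1)
        return attn @ v                        # (batch, seq_len, dim)
```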

Self-attention fully convolutional DenseNets for automatic salt segmentation

OM Saad, W Chen, F Zhang, L Yang… - … on Neural Networks …, 2022 - ieeexplore.ieee.org
… We propose to use fully convolutional DenseNets [26] with a self-attention mechanism to …
the four convolutional layers is 3×3. We use an extra dense block with six layers, 96 feature …
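
The snippet mentions a dense block with six 3×3 convolutional layers producing 96 features; a minimal sketch of that dense-block pattern follows. The growth rate of 16 (6 × 16 = 96) is an inference for illustration, not a value stated in the snippet.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """DenseNet-style block: each 3x3 conv layer takes the concatenation
    of all previous feature maps as input. Six layers as in the snippet;
    growth rate 16 (6 * 16 = 96 new features) is an assumption."""
    def __init__(self, in_channels, growth_rate=16, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
            )
            for i in range(n_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features[1:], dim=1)  # the 96 new feature maps
```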

Beyond self-attention: External attention using two linear layers for visual tasks

MH Guo, ZN Liu, TJ Mu, SM Hu - IEEE Transactions on Pattern …, 2022 - ieeexplore.ieee.org
… Unlike self-attention which obtains an attention map by computing affinities between self
queries and self keys, our external attention computes the relation between self queries and a …
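
A sketch of this external-attention idea, following the paper's description: the input's self queries attend to a small learnable external memory realized as two linear layers, with the paper's double normalization in place of a plain softmax. The memory size of 64 is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention (Guo et al.): queries attend to a small,
    learnable memory shared across the dataset, implemented with two
    linear layers instead of key/value projections of the input."""
    def __init__(self, dim, mem_size=64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)  # external key memory
        self.mv = nn.Linear(mem_size, dim, bias=False)  # external value memory

    def forward(self, q):                  # q: (batch, n, dim) self queries
        attn = self.mk(q)                  # (batch, n, mem_size) affinities
        attn = torch.softmax(attn, dim=1)  # double normalization, step 1
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # step 2
        return self.mv(attn)               # (batch, n, dim)
```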

DensSiam: End-to-end densely-Siamese network with self-attention model for object tracking

MH Abdelpakey, MS Shehata… - Advances in Visual …, 2018 - Springer
…, which uses the concept of dense layers and connects each dense layer to all layers in a
feed-… also includes a Self-Attention mechanism to force the network to pay more attention to the …

Synthesizer: Rethinking self-attention for transformer models

Y Tay, D Bahri, D Metzler, DC Juan… - International …, 2021 - proceedings.mlr.press
… self-attention module and directly synthesizes the alignment matrix instead. For simplicity,
we describe the per head and per layer … Dense … and V to denote vanilla dot product attention. …
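
The snippet refers to the Dense Synthesizer variant, which synthesizes the alignment matrix directly from each token via a feed-forward map instead of computing query-key dot products. A minimal sketch, with illustrative dimensions:

```python
import torch
import torch.nn as nn

class DenseSynthesizer(nn.Module):
    """Dense Synthesizer sketch: the alignment matrix is produced from
    each token alone by a two-layer feed-forward map, with no query-key
    dot products. Requires a fixed maximum sequence length."""
    def __init__(self, dim, max_len):
        super().__init__()
        self.synth = nn.Sequential(        # F(x): token -> attention row
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, max_len),
        )
        self.value = nn.Linear(dim, dim)   # G(x): value projection

    def forward(self, x):                  # x: (batch, n, dim), n <= max_len
        n = x.size(1)
        b = self.synth(x)[:, :, :n]        # synthesized alignment scores
        attn = torch.softmax(b, dim=-1)
        return attn @ self.value(x)        # (batch, n, dim)
```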

ViolenceNet: Dense multi-head self-attention with bidirectional convolutional LSTM for detecting violence

FJ Rendón-Segador, JA Álvarez-García, F Enríquez… - Electronics, 2021 - mdpi.com
… Multi-head self-attention is a layer that essentially applies multiple self-attention mechanisms
… We use the multi-head self-attention layer in combination with the recurrent convolutional …
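
A minimal sketch of a multi-head self-attention layer using PyTorch's built-in module: several attention heads run in parallel, each with its own learned projections, and their outputs are concatenated. The width and head count here are illustrative, not ViolenceNet's values.

```python
import torch
import torch.nn as nn

# Multi-head self-attention via PyTorch's built-in layer; sizes are
# placeholders chosen for illustration.
mha = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)

x = torch.randn(4, 100, 256)     # (batch, seq_len, features)
out, weights = mha(x, x, x)      # query = key = value -> self-attention
print(out.shape)                 # torch.Size([4, 100, 256])
```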

Voice and accompaniment separation in music using self-attention convolutional neural network

Y Liu, B Thoshkahna, A Milani… - arXiv preprint arXiv …, 2020 - arxiv.org
… insert such self-attention subnets into different levels of Dense-UNet, … dense blocks and
before downsampling/upsampling layers. We do not add any self-attention subnets for the first dense …
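
A schematic of the placement the snippet describes: a self-attention subnet inserted after a dense block and before downsampling in one encoder level. The component modules are stand-ins passed in by the caller, not the paper's actual subnets.

```python
import torch.nn as nn

class EncoderStage(nn.Module):
    """One Dense-UNet encoder level, sketching the placement from the
    snippet: dense block -> self-attention subnet -> downsampling.
    `dense_block` and `attention` are caller-supplied stand-ins."""
    def __init__(self, dense_block, attention, channels):
        super().__init__()
        self.dense = dense_block
        self.attn = attention
        self.down = nn.Conv2d(channels, channels,
                              kernel_size=2, stride=2)  # downsampling

    def forward(self, x):
        x = self.dense(x)
        x = self.attn(x)       # attention after the dense block,
        return self.down(x)    # before downsampling
```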

Self-attention capsule networks for object classification

A Hoogi, B Wilcox, Y Gupta, DL Rubin - arXiv preprint arXiv:1904.12483, 2019 - arxiv.org
… It is similar to the ResNet-18 architecture, the difference being densely connected
feature maps in the final layer: a dense block instead of a residual block. …

DCGSA: A global self-attention network with dilated convolution for crowd density map generating

L Zhu, C Li, B Wang, K Yuan, Z Yang - Neurocomputing, 2020 - Elsevier
… by global average pooling of deep layers can guide shallow layers to learn person
localization details. Thus, we design a Global Self-Attention Module in order to provide global …
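
A sketch of the guidance mechanism the snippet describes: a global descriptor pooled from deep features re-weights shallow features so they focus on person locations. This illustrates the idea only; it is not the paper's exact DCGSA module, and the sigmoid gating is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalGuidance(nn.Module):
    """Global context from deep layers (via global average pooling)
    re-weights shallow features. An illustrative sketch, not the
    paper's exact module."""
    def __init__(self, deep_ch, shallow_ch):
        super().__init__()
        self.fc = nn.Linear(deep_ch, shallow_ch)

    def forward(self, shallow, deep):
        g = F.adaptive_avg_pool2d(deep, 1).flatten(1)  # (B, deep_ch) context
        w = torch.sigmoid(self.fc(g))                  # channel guidance weights
        return shallow * w[:, :, None, None]           # guided shallow features
```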

TransForensics: Image forgery localization with dense self-attention

J Hao, Z Zhang, S Yang, D Xie… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
… Third, a transformer encoder has 6 encoder layers, and each one consists of a multi-head
self-attention module and a feed-forward network (FFN). The dimension of the FFN is …
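
The described encoder is the standard transformer stack, which PyTorch provides directly; a minimal sketch with 6 layers as in the snippet. The model width and FFN dimension are assumptions, since the snippet truncates before giving them.

```python
import torch
import torch.nn as nn

# 6 encoder layers, each a multi-head self-attention module plus an FFN,
# matching the structure in the snippet. d_model and dim_feedforward are
# illustrative values, not the paper's.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8,
                                   dim_feedforward=1024, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

tokens = torch.randn(2, 196, 256)  # (batch, tokens, features)
out = encoder(tokens)              # same shape, contextualized features
```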