Authors
Yeguang Qin, Fengxiao Tang, Ming Zhao, Yusen Zhu
Publication date
2022/11/22
Book
International Conference on Neural Information Processing
Pages
142-152
Publisher
Springer Nature Singapore
Description
Recently, single image super-resolution (SISR) methods have been based primarily on building deeper and more complex convolutional neural networks (CNNs), which incurs a colossal computation overhead. At the same time, some works have introduced Transformers to low-level vision tasks, achieving high performance but at a high computational cost. To address this problem, we propose an attention-based feature fusion super-resolution network (AFFSRN) that reduces network complexity while achieving higher performance. Because a CNN's strength in capturing local detail comes with weak global modeling capability, we propose the Swin Transformer block (STB) in place of the convolution operation for global feature modeling. Based on the STB, we further propose the self-attention feature distillation block (SFDB) for efficient feature extraction. Furthermore, to increase the depth of the network with a small computational …
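The abstract does not detail the internals of the STB or SFDB, so the following is only a minimal sketch of how a window self-attention block could replace convolution inside a feature-distillation structure. It assumes a simplified Swin-style (non-shifted) window attention and an IMDN-style channel-split distillation; the class names, channel split ratio, and fusion layout here are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: the paper's actual STB/SFDB designs are not given
# in this abstract. Assumes non-shifted window attention and an
# information-distillation-style channel split (both are assumptions).
import torch
import torch.nn as nn


class WindowSelfAttention(nn.Module):
    """Self-attention within non-overlapping spatial windows (Swin-style, no shift)."""

    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); H and W are assumed divisible by the window size.
        b, c, h, w = x.shape
        ws = self.window
        # Partition the feature map into (num_windows * B, ws*ws, C) token sequences.
        t = x.view(b, c, h // ws, ws, w // ws, ws)
        t = t.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        n = self.norm(t)
        attn_out, _ = self.attn(n, n, n, need_weights=False)
        t = t + attn_out  # pre-norm residual connection
        # Reverse the window partition back to (B, C, H, W).
        t = t.view(b, h // ws, w // ws, ws, ws, c)
        return t.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)


class SFDB(nn.Module):
    """Hypothetical self-attention feature distillation block: at each step,
    "distill" a quarter of the channels with a 1x1 conv, refine the full
    feature with window attention, then fuse all distilled features."""

    def __init__(self, dim: int = 64, steps: int = 3, window: int = 8):
        super().__init__()
        self.distill = nn.ModuleList(nn.Conv2d(dim, dim // 4, 1) for _ in range(steps))
        self.refine = nn.ModuleList(WindowSelfAttention(dim, window) for _ in range(steps))
        self.fuse = nn.Conv2d(steps * (dim // 4) + dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        distilled, f = [], x
        for d, r in zip(self.distill, self.refine):
            distilled.append(d(f))  # coarse features kept at this stage
            f = r(f)                # remaining features refined by attention
        return self.fuse(torch.cat(distilled + [f], dim=1)) + x  # long residual


if __name__ == "__main__":
    x = torch.randn(1, 64, 64, 64)  # (B, C, H, W), spatial dims divisible by 8
    y = SFDB(dim=64)(x)
    print(y.shape)  # torch.Size([1, 64, 64, 64])
```

The residual connections (per-window and block-level) keep such a block easy to stack deeply, which matches the abstract's stated goal of increasing network depth at small computational cost.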
Scholar articles
Y Qin, F Tang, M Zhao, Y Zhu - International Conference on Neural Information …, 2022