Head-free lightweight semantic segmentation with linear transformer

B Dong, P Wang, F Wang - Proceedings of the AAAI Conference on Artificial Intelligence, 2023 - ojs.aaai.org
Abstract
Existing semantic segmentation works have mainly focused on designing effective decoders; however, the computational load introduced by the overall structure has long been ignored, which hinders their application on resource-constrained hardware. In this paper, we propose a head-free lightweight architecture specifically for semantic segmentation, named Adaptive Frequency Transformer (AFFormer). AFFormer adopts a parallel architecture that leverages prototype representations as specific learnable local descriptions, which replaces the decoder and preserves the rich image semantics on high-resolution features. Although removing the decoder compresses most of the computation, the accuracy of the parallel structure is still limited by low computational resources. Therefore, we employ heterogeneous operators (CNN and vision Transformer) for pixel embedding and prototype representations to further save computational costs. Moreover, it is very difficult to linearize the complexity of the vision Transformer from the spatial-domain perspective. Since semantic segmentation is very sensitive to frequency information, we construct a lightweight prototype learning block with an adaptive frequency filter of complexity O(n) to replace standard self-attention, whose complexity is O(n^2). Extensive experiments on widely adopted datasets demonstrate that AFFormer achieves superior accuracy while retaining only 3M parameters. On the ADE20K dataset, AFFormer achieves 41.8 mIoU with 4.6 GFLOPs, which is 4.4 mIoU higher than SegFormer with 45% fewer GFLOPs. On the Cityscapes dataset, AFFormer achieves 78.7 mIoU with 34.4 GFLOPs, which is 2.5 mIoU higher than SegFormer with 72.5% fewer GFLOPs. Code is available at https://github.com/dongbo811/AFFormer.
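The efficiency claim in the abstract rests on replacing O(n^2) self-attention over pixel tokens with a frequency-domain filter. The official implementation is at the GitHub link above; the snippet below is only a minimal PyTorch sketch of the general idea, filtering token features with learnable per-frequency weights via an FFT (which is O(n log n) rather than the paper's O(n) operator). The class name FrequencyFilterSketch and its parameters are illustrative assumptions, not names from the AFFormer codebase.

```python
# Minimal sketch (assumed names, not the official AFFormer code): a learnable
# frequency-domain filter over token features, standing in for self-attention.
import torch
import torch.nn as nn


class FrequencyFilterSketch(nn.Module):
    def __init__(self, dim: int, tokens: int):
        super().__init__()
        # One learnable complex weight per frequency bin and channel,
        # stored as (real, imag) pairs in the last dimension.
        self.weight = nn.Parameter(torch.randn(tokens // 2 + 1, dim, 2) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) pixel-embedding tokens
        freq = torch.fft.rfft(x, dim=1, norm="ortho")      # to frequency domain
        freq = freq * torch.view_as_complex(self.weight)   # per-bin filtering
        return torch.fft.irfft(freq, n=x.shape[1], dim=1, norm="ortho")


if __name__ == "__main__":
    block = FrequencyFilterSketch(dim=64, tokens=256)
    out = block(torch.randn(2, 256, 64))
    print(out.shape)  # torch.Size([2, 256, 64])
```

Because every frequency bin is filtered independently, the cost grows with the token count rather than with all pairwise token interactions, which is the rough intuition behind swapping attention for frequency filtering.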