SINet: Extreme Lightweight Portrait Segmentation Networks with Spatial Squeeze Modules and Information Blocking Decoder

H Park, L Sjosund, YJ Yoo, N Monet… - Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020 - openaccess.thecvf.com
Abstract
Designing a lightweight and robust portrait segmentation algorithm is an important task for a wide range of face applications. However, the problem has been considered a subset of object segmentation and has received less attention in this field. Portrait segmentation nevertheless has its own unique requirements. First, because portrait segmentation is performed in the middle of a whole processing pipeline, it requires extremely lightweight models. Second, there are no public datasets in this domain that contain a sufficient number of images. To solve the first problem, we introduce a new extremely lightweight portrait segmentation model, SINet, containing an information blocking decoder and spatial squeeze modules. The information blocking decoder uses confidence estimation to recover local spatial information without spoiling global consistency. The spatial squeeze module uses multiple receptive fields to cope with various sizes of consistency in the image. To tackle the second problem, we propose a simple method to create additional portrait segmentation data, which can improve accuracy. In our qualitative and quantitative analysis on the EG1800 dataset, we show that our method outperforms various existing lightweight models. Our method reduces the number of parameters from 2.1M to 86.9K (around 95.9% reduction), while maintaining the accuracy within a 1% margin of the state-of-the-art method. We also show that our model executes on a real mobile device at 100.6 FPS. In addition, we demonstrate that our method can be used for general semantic segmentation on the Cityscapes dataset. The code and dataset are available at https://github.com/HYOJINPARK/ExtPortraitSeg.
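The information-blocking idea in the abstract can be illustrated with a minimal sketch: take a confidence map from the decoder's coarse prediction (here, the max softmax probability per pixel) and gate the high-resolution encoder features by (1 - confidence), so local detail flows into the decoder only where the global prediction is uncertain, such as object boundaries. The function name, array shapes, and the use of plain numpy below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def information_blocking_gate(coarse_logits, encoder_feat):
    """Illustrative sketch of information-blocking gating (hypothetical API).

    coarse_logits: (H, W, C) low-stage class logits, assumed already
        upsampled to the encoder feature resolution.
    encoder_feat:  (H, W, D) high-resolution encoder features.

    Confidence c is the max softmax probability per pixel; features are
    multiplied by (1 - c), blocking regions the coarse prediction is
    already sure about.
    """
    # Numerically stable softmax over the class axis.
    e = np.exp(coarse_logits - coarse_logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    confidence = probs.max(axis=-1, keepdims=True)   # (H, W, 1)
    # Block detail where the coarse prediction is confident.
    return encoder_feat * (1.0 - confidence)
```

With a near one-hot coarse prediction at a pixel, the gated feature there goes to nearly zero; at a maximally uncertain pixel (uniform softmax), roughly half the feature magnitude passes through.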