Designing a lightweight and robust portrait segmentation algorithm is an important task for a wide range of face applications. However, the problem has been treated as a subset of object segmentation and has received little dedicated attention. Portrait segmentation nevertheless has its own requirements. First, because portrait segmentation is typically performed in the middle of a larger pipeline, it requires extremely lightweight models. Second, no public dataset in this domain contains a sufficient number of images. To solve the first problem, we introduce SINet, an extremely lightweight portrait segmentation model containing an information blocking decoder and spatial squeeze modules. The information blocking decoder uses confidence estimation to recover local spatial information without spoiling global consistency. The spatial squeeze module uses multiple receptive fields to cope with various sizes of consistency. To tackle the second problem, we propose a simple method for creating additional portrait segmentation data, which improves accuracy. In our qualitative and quantitative analysis on the EG1800 dataset, we show that our method outperforms various existing lightweight models. Our method reduces the number of parameters from 2.1M to 86.9K (around a 95.9% reduction) while keeping accuracy within a 1% margin of the state-of-the-art method. We also show that our model runs on a real mobile device at 100.6 FPS. In addition, we demonstrate that our method can be used for general semantic segmentation on the Cityscapes dataset. The code and dataset are available at https://github.com/HYOJINPARK/ExtPortraitSeg.