Multi-label Classification (MLC), which has recently attracted considerable attention, aims to build classification models for objects that are assigned multiple class labels simultaneously. Existing MLC approaches mainly focus on improving supervised learning, which requires a relatively large amount of labeled training data. In this work, we propose a semi-supervised MLC algorithm that exploits unlabeled data to enhance performance. During training, our algorithm constructs label-specific features for each prominent class label chosen by a greedy approach, extending the LIFT algorithm, and adopts the unlabeled-data consumption mechanism of TESC. For classification, 1-Nearest-Neighbor (1NN) is applied to select appropriate class labels for a new data instance. Our experimental results on a data set of hotel (tourism) reviews indicate that a reasonable amount of unlabeled data helps to increase the F1 score. Interestingly, with a small amount of labeled data, our algorithm can reach performance comparable to that obtained with a larger amount of labeled data.
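The 1NN classification step can be sketched as follows. This is a minimal illustration, assuming a TESC-style setup in which training produces labeled clusters and a new instance receives the label set of its nearest cluster representative; the function name, the centroid-based cluster representation, and the toy labels are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def nearest_cluster_labels(x, centroids, label_sets):
    """Assign to instance x the label set of its nearest cluster.

    centroids  : (n_clusters, n_features) array of cluster representatives
    label_sets : list of label sets, one per cluster
    """
    # 1NN over cluster representatives using Euclidean distance.
    dists = np.linalg.norm(centroids - x, axis=1)
    return label_sets[int(np.argmin(dists))]

# Toy usage: three clusters with known label sets (hypothetical labels).
centroids = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
label_sets = [{"service"}, {"location", "food"}, {"price"}]
print(nearest_cluster_labels(np.array([4.5, 5.2]), centroids, label_sets))
```

In this sketch a single nearest cluster supplies the full label set at once, which is what makes a 1NN rule natural for MLC: label correlations captured during clustering are preserved at prediction time.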