Comparison of base classifiers for multi-label learning

EKY Yapp, X Li, WF Lu, PS Tan - Neurocomputing, 2020 - Elsevier
Abstract
Multi-label learning methods can be categorised into algorithm adaptation, problem transformation and ensemble methods. Some of these methods depend on a base classifier, and this dependence is not well understood. In this paper, the sensitivity of five problem transformation and two ensemble methods to four types of base classifiers is studied. Their performance across 11 benchmark datasets is measured using 16 evaluation metrics. The best classifier is shown to depend on the method: Support Vector Machines (SVM) for binary relevance, classifier chains, calibrated label ranking, quick weighted multi-label learning and RAndom k-labELsets; k-Nearest Neighbours (k-NN) and Naïve Bayes (NB) for Hierarchy Of Multilabel classifiERs; and Decision Trees (DT) for ensemble of classifier chains. The statistical performance of a classifier is also found to be generally consistent across the metrics for any given method. Overall, DT and SVM have the best performance–computational time trade-off, followed by k-NN and NB.
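To illustrate two of the problem transformation methods named above: binary relevance fits one independent classifier per label, while a classifier chain additionally feeds each earlier label's prediction to the classifiers that follow it, letting the chain exploit label correlations. A minimal sketch using scikit-learn with an SVM base classifier, as the abstract recommends for these two methods; the synthetic dataset and parameters here are illustrative and are not the paper's experimental setup:

```python
# Illustrative sketch of binary relevance vs. classifier chains with an SVM
# base classifier (scikit-learn); not the authors' implementation.
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier   # one SVM per label
from sklearn.multioutput import ClassifierChain      # SVMs linked in a chain
from sklearn.svm import SVC
from sklearn.metrics import hamming_loss

# Synthetic multi-label data: 5 labels per instance, binary indicator matrix Y.
X, Y = make_multilabel_classification(
    n_samples=300, n_features=20, n_classes=5, random_state=0
)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary relevance: each label gets its own independent SVM.
br = OneVsRestClassifier(SVC(kernel="linear")).fit(X_tr, Y_tr)

# Classifier chain: each SVM also receives the preceding labels as features.
cc = ClassifierChain(SVC(kernel="linear"), random_state=0).fit(X_tr, Y_tr)

print("binary relevance Hamming loss:", hamming_loss(Y_te, br.predict(X_te)))
print("classifier chain Hamming loss:", hamming_loss(Y_te, cc.predict(X_te)))
```

Hamming loss is one of the 16 metrics typically reported in this setting; swapping `SVC` for a decision tree, k-NN, or Naïve Bayes estimator reproduces the kind of base-classifier comparison the paper performs.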