Is combining classifiers with stacking better than selecting the best one?

S Džeroski, B Ženko - Machine learning, 2004 - Springer
We empirically evaluate several state-of-the-art methods for constructing ensembles of
heterogeneous classifiers with stacking and show that they perform (at best) comparably to …
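
A minimal sketch of the kind of comparison this line of work runs, assuming a scikit-learn setup; the toy dataset, base learners, and meta-learner below are illustrative choices, not the paper's experimental setup. It pits a stacked ensemble of heterogeneous classifiers against simply keeping the single base learner with the best cross-validated accuracy.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

base_learners = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
]

# "Select the best": keep the base learner with the highest cross-validated accuracy.
best_name, best_score = max(
    ((name, cross_val_score(est, X, y, cv=5).mean()) for name, est in base_learners),
    key=lambda t: t[1],
)

# Stacking: a logistic-regression meta-learner trained on the base learners' outputs.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack_score = cross_val_score(stack, X, y, cv=5).mean()

print(f"best single model: {best_name} ({best_score:.3f})")
print(f"stacked ensemble:  {stack_score:.3f}")
```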

Generating ensembles of heterogeneous classifiers using stacked generalization

MP Sesmero, AI Ledezma… - … reviews: data mining and …, 2015 - Wiley Online Library
Over the last two decades, the machine learning and related communities have conducted
numerous studies to improve the performance of a single classifier by combining several …

Troika – an improved stacking schema for classification tasks

E Menahem, L Rokach, Y Elovici - Information Sciences, 2009 - Elsevier
Stacking is a general ensemble method in which a number of base classifiers are combined
using one meta-classifier which learns their outputs. Such an approach provides certain …
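
The stacked-generalization mechanics the snippet describes can be sketched in a few lines of scikit-learn: each base classifier's out-of-fold predictions become the training inputs of the meta-classifier. The toy dataset, the choice of base classifiers, and the logistic-regression meta-classifier are assumptions for illustration, not the Troika scheme itself.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
base_classifiers = [DecisionTreeClassifier(random_state=0), GaussianNB()]

# Level-one data: out-of-fold class probabilities from each base classifier,
# so the meta-classifier never sees predictions made on a classifier's own training folds.
meta_features = np.hstack([
    cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    for clf in base_classifiers
])

# The meta-classifier learns how to combine the base classifiers' outputs.
meta_classifier = LogisticRegression(max_iter=1000).fit(meta_features, y)

# At prediction time the base classifiers are refit on all the data and their
# outputs are fed to the meta-classifier.
for clf in base_classifiers:
    clf.fit(X, y)
new_meta = np.hstack([clf.predict_proba(X) for clf in base_classifiers])
print(meta_classifier.predict(new_meta)[:10])
```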

Effective voting of heterogeneous classifiers

G Tsoumakas, I Katakis, I Vlahavas - European conference on machine …, 2004 - Springer
This paper deals with the combination of classification models that have been derived from
running different (heterogeneous) learning algorithms on the same data set. We focus on the …

A probabilistic classifier ensemble weighting scheme based on cross-validated accuracy estimates

J Large, J Lines, A Bagnall - Data mining and knowledge discovery, 2019 - Springer
Our hypothesis is that building ensembles of small sets of strong classifiers constructed with
different learning algorithms is, on average, the best approach to classification for real-world …
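
A rough sketch of the underlying idea: weight each member's probability estimates by its cross-validated accuracy on the training data. The paper's actual scheme transforms the accuracy estimates further, and the dataset and member classifiers here are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = [DecisionTreeClassifier(random_state=0),
               GaussianNB(),
               LogisticRegression(max_iter=1000)]

# Estimate each classifier's accuracy via cross-validation on the training set
# and use that estimate as its voting weight.
weights = np.array([cross_val_score(c, X_train, y_train, cv=5).mean()
                    for c in classifiers])

for c in classifiers:
    c.fit(X_train, y_train)

# Weighted average of class-probability estimates, then pick the top class.
probs = sum(w * c.predict_proba(X_test) for w, c in zip(weights, classifiers))
pred = probs.argmax(axis=1)
print("weighted-ensemble accuracy:", (pred == y_test).mean())
```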

Ensemble methods

ZH Zhou - Combining pattern classifiers. Wiley, Hoboken, 2014 - api.taylorfrancis.com
Ensemble methods that train multiple learners and then combine them for use, with Boosting
and Bagging as representatives, are a kind of state-of-the-art learning approach. It is well …
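
For concreteness, a minimal scikit-learn sketch of the two representatives named above, run on an assumed toy dataset.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Bagging: many trees trained on bootstrap resamples, combined by voting.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

# Boosting: weak learners trained sequentially, reweighting the examples
# the previous learners got wrong.
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```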

Selective fusion of heterogeneous classifiers

G Tsoumakas, L Angelis, I Vlahavas - Intelligent Data Analysis, 2005 - content.iospress.com
There are two main paradigms in combining different classification algorithms: Classifier
Selection and Classifier Fusion. The first one selects a single model for classifying a new …

A weighted voting framework for classifiers ensembles

LI Kuncheva, JJ Rodríguez - Knowledge and information systems, 2014 - Springer
We propose a probabilistic framework for classifier combination, which gives rigorous
optimality conditions (minimum classification error) for four combination methods: majority …
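
One of the combination rules such a framework covers, weighted majority (plurality) voting over predicted labels, can be written directly; the label matrix, weights, and class count below are made-up illustrative values, not taken from the paper.

```python
import numpy as np

def weighted_majority_vote(labels, weights, n_classes):
    """labels: (n_classifiers, n_samples) array of predicted class indices."""
    n_samples = labels.shape[1]
    votes = np.zeros((n_samples, n_classes))
    for w, preds in zip(weights, labels):
        votes[np.arange(n_samples), preds] += w  # each classifier casts a weighted vote
    return votes.argmax(axis=1)                  # class with the largest weighted vote wins

# Three classifiers, four samples, three classes.
labels = np.array([[0, 1, 2, 1],
                   [0, 2, 2, 1],
                   [1, 1, 0, 0]])
weights = np.array([0.5, 0.3, 0.2])
print(weighted_majority_vote(labels, weights, n_classes=3))  # -> [0 1 2 1]
```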

Popular ensemble methods: An empirical study

D Opitz, R Maclin - Journal of artificial intelligence research, 1999 - jair.org
An ensemble consists of a set of individually trained classifiers (such as neural networks or
decision trees) whose predictions are combined when classifying novel instances. Previous …

Error correlation and error reduction in ensemble classifiers

K Tumer, J Ghosh - Connection science, 1996 - Taylor & Francis
Using an ensemble of classifiers, instead of a single classifier, can lead to improved
generalization. The gains obtained by combining, however, are often affected more by the …
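
A toy simulation (not from the paper) of that effect: averaging reduces error variance sharply when the members' errors are independent, and barely at all when they are strongly correlated. The Gaussian noise model and the numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classifiers, n_samples = 10, 100_000

for rho in (0.0, 0.9):
    # Each classifier's error shares a common component (weight sqrt(rho))
    # plus an independent component, giving pairwise correlation rho.
    common = rng.standard_normal(n_samples)
    independent = rng.standard_normal((n_classifiers, n_samples))
    errors = np.sqrt(rho) * common + np.sqrt(1 - rho) * independent
    print(f"rho={rho}: single-model error var={errors[0].var():.2f}, "
          f"ensemble error var={errors.mean(axis=0).var():.2f}")
```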