Extended methods to handle classification biases

E Beauxis-Aussalet, L Hardman - 2017 IEEE International …, 2017 - ieeexplore.ieee.org
Classifiers can provide counts of items per class, but systematic classification errors yield
biases (e.g., if a class is often misclassified as another, its size may be under-estimated). To …
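A minimal sketch of the kind of count correction this line of work addresses: if the confusion behaviour of a classifier is known, the observed per-class counts can be de-biased by inverting the confusion matrix. The matrix values and counts below are illustrative assumptions, and this generic "adjusted count" approach is not necessarily the exact method proposed in the paper.

```python
import numpy as np

# conf[i, j] ~ P(predicted class j | true class i), estimated on labeled data
# (illustrative values, not from the paper)
conf = np.array([[0.9, 0.1],
                 [0.2, 0.8]])

observed_counts = np.array([700, 300])  # raw counts of predicted labels

# Observed counts are approximately conf.T @ true_counts,
# so solving the linear system de-biases the class-size estimate.
corrected = np.linalg.solve(conf.T, observed_counts)
corrected = np.clip(corrected, 0, None)  # guard against negative estimates
print(corrected)
```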

Detecting fractures in classifier performance

DA Cieslak, NV Chawla - Seventh IEEE International …, 2007 - ieeexplore.ieee.org
A fundamental tenet assumed by many classification algorithms is the presumption that both
training and testing samples are drawn from the same distribution of data; this is the …
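As a generic illustration of checking that "same distribution" assumption, one can compare training and test feature distributions with a two-sample test per feature. The data, shift size, and significance threshold below are assumptions; this is not the specific fracture-detection procedure from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))
test = rng.normal(0.3, 1.0, size=(1000, 3))   # deliberately shifted test set

# Flag features whose train/test distributions differ significantly
for j in range(train.shape[1]):
    stat, p = ks_2samp(train[:, j], test[:, j])
    flag = "possible shift" if p < 0.01 else "ok"
    print(f"feature {j}: KS={stat:.3f}, p={p:.3g} -> {flag}")
```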

Using balancing terms to avoid discrimination in classification

SA Enni, I Assent - 2018 IEEE International Conference on …, 2018 - ieeexplore.ieee.org
From personalized ad delivery and healthcare to criminal sentencing, more decisions are
made with help from methods developed in the fields of data mining and machine learning …
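For context, one common discrimination measure that a balancing term might target is the demographic-parity gap, i.e. the difference in positive-prediction rates between protected groups. The synthetic data and the choice of metric below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)                            # protected attribute (0/1)
pred = (rng.random(1000) < np.where(group == 1, 0.6, 0.4)).astype(int)  # biased predictions

rate_g0 = pred[group == 0].mean()
rate_g1 = pred[group == 1].mean()
print(f"positive rate, group 0: {rate_g0:.3f}")
print(f"positive rate, group 1: {rate_g1:.3f}")
print(f"demographic-parity gap: {abs(rate_g1 - rate_g0):.3f}")
```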

Building projectable classifiers of arbitrary complexity

TK Ho, EM Kleinberg - Proceedings of 13th International …, 1996 - ieeexplore.ieee.org
Conventional methods for classifier design often suffer from having two conflicting goals-to
develop arbitrarily complex decision boundaries to suit a given problem, and at the same …

Consequences of variability in classifier performance estimates

T Raeder, TR Hoens, NV Chawla - 2010 IEEE International …, 2010 - ieeexplore.ieee.org
The prevailing approach to evaluating classifiers in the machine learning community
involves comparing the performance of several algorithms over a series of usually unrelated …
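A quick way to see the variability such evaluations are subject to is to repeat a cross-validated estimate many times and inspect its spread. The dataset, model, and fold/repeat counts below are assumed for illustration and do not reproduce the paper's experimental design.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic data and a simple model, purely for illustration
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

print(f"mean accuracy: {scores.mean():.3f}")
print(f"std across folds/repeats: {scores.std():.3f}")
print(f"range: [{scores.min():.3f}, {scores.max():.3f}]")
```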

Model stability: a key factor in determining whether an algorithm produces an optimal model from a matching distribution

KM Ting, RJY Quek - Third IEEE International Conference on …, 2003 - ieeexplore.ieee.org
We investigate the factors leading to producing suboptimal models when training and test
class distributions (or misclassification costs) are matched. Our result shows that model …

Aggregating performance metrics for classifier evaluation

N Seliya, TM Khoshgoftaar… - 2009 IEEE International …, 2009 - ieeexplore.ieee.org
There are several performance metrics that have been proposed for evaluating a
classification model, e.g., accuracy, error rates, precision, recall, etc. While it is known that …
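A minimal sketch of computing several of the metrics named above from a binary confusion matrix, followed by a naive aggregate. The plain mean used as the aggregate is an assumed placeholder, not the aggregation scheme proposed in the paper.

```python
import numpy as np

# Illustrative confusion-matrix cells
tp, fp, fn, tn = 80, 10, 20, 90

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

metrics = {"accuracy": accuracy, "precision": precision,
           "recall": recall, "f1": f1}
print(metrics)
print("naive aggregate (mean of metrics):", np.mean(list(metrics.values())))
```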

Class decomposition via clustering: a new framework for low-variance classifiers

R Vilalta, MK Achari, CF Eick - Third IEEE International …, 2003 - ieeexplore.ieee.org
We propose a preprocessing step to classification that applies a clustering algorithm to the
training set to discover local patterns in the attribute or input space. We demonstrate how …
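A sketch of that preprocessing idea: split each class into sub-classes found by k-means, train a low-variance (here, linear) classifier on the sub-class labels, and map predictions back to the original classes. The number of clusters per class and the choice of logistic regression are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           n_classes=2, random_state=0)

# Decompose each class into 2 clusters and assign unique sub-class labels
sub_labels = np.empty_like(y)
sub_to_class = {}
next_id = 0
for c in np.unique(y):
    idx = np.where(y == c)[0]
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[idx])
    for cluster in range(2):
        sub_to_class[next_id + cluster] = c
    sub_labels[idx] = km.labels_ + next_id
    next_id += 2

# Train on sub-classes, then map predictions back to the original classes
clf = LogisticRegression(max_iter=1000).fit(X, sub_labels)
pred_class = np.vectorize(sub_to_class.get)(clf.predict(X))
print("training accuracy on original classes:", (pred_class == y).mean())
```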

'Propose and Review': Interactive Bias Mitigation for Machine Classifiers

T Li, Z Tang, T Lu, XM Zhang - Available at SSRN 4139244, 2022 - papers.ssrn.com
We develop a solution framework for mitigating algorithmic bias in machine-learning
classifiers. We consider an interactive problem setting where Alice (e.g., the firm) proposes to …

Lossy Predictive Models for Accurate Classification Algorithms

A Moon, SW Son, H Kim, M Kim - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
Recent years have witnessed an upsurge of interest in lossy compression due to its potential
to significantly reduce data volume with adequate exploitation of the spatiotemporal …