Threshold optimization and random undersampling for imbalanced credit card data

JL Leevy, JM Johnson, J Hancock, TM Khoshgoftaar - Journal of Big Data, 2023 - Springer
Abstract
Output thresholding is well-suited for addressing class imbalance, since the technique does not increase dataset size, run the risk of discarding important instances, or modify an existing learner. Using the Credit Card Fraud Detection Dataset, this study proposes a threshold optimization approach that factors in the constraint True Positive Rate (TPR) ≥ True Negative Rate (TNR). Our findings indicate that an increase in the Area Under the Precision–Recall Curve (AUPRC) score is associated with an improvement in threshold-based classification scores, while an increase in the positive class prior probability causes optimal thresholds to increase. In addition, we discovered that the best overall results for the selection of an optimal threshold are obtained without the use of Random Undersampling (RUS). Furthermore, with the exception of AUPRC, we established that the default threshold yields good performance scores at a balanced class ratio. Our evaluation of four threshold optimization techniques, eight threshold-dependent metrics, and two threshold-agnostic metrics defines the uniqueness of this research.
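To illustrate the constrained threshold selection described above, the sketch below scans candidate decision thresholds and keeps only those satisfying TPR ≥ TNR. The abstract does not specify the objective used to rank the admissible thresholds (the paper evaluates four threshold optimization techniques), so the geometric mean of TPR and TNR used here is an illustrative assumption, not the authors' method.

```python
import numpy as np
from sklearn.metrics import confusion_matrix


def optimal_threshold(y_true, y_scores, step=0.001):
    """Select a decision threshold subject to the constraint TPR >= TNR.

    Assumption: admissible thresholds are ranked by the geometric mean of
    TPR and TNR; the paper's actual optimization criteria may differ.
    """
    best_t, best_gmean = 0.5, -1.0
    for t in np.arange(step, 1.0, step):
        y_pred = (y_scores >= t).astype(int)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
        tpr = tp / (tp + fn) if (tp + fn) else 0.0  # True Positive Rate (recall)
        tnr = tn / (tn + fp) if (tn + fp) else 0.0  # True Negative Rate (specificity)
        if tpr >= tnr:                              # constraint from the abstract
            gmean = np.sqrt(tpr * tnr)
            if gmean > best_gmean:
                best_t, best_gmean = t, gmean
    return best_t, best_gmean
```

In practice, `y_scores` would be the positive-class probabilities from a trained classifier (e.g., `model.predict_proba(X)[:, 1]`), and the returned threshold replaces the default cutoff of 0.5 when converting scores to class labels.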