Authors
Aditya Singh, Alessandro Bay, Biswa Sengupta, Andrea Mirabile
Publication date
2021
Journal
ICML Workshop on Uncertainty and Robustness in Deep Learning
Description
Modern neural networks are highly miscalibrated, which poses a significant challenge to the reliable use of deep neural networks (DNNs) in safety-critical systems. Many recently proposed approaches have demonstrated substantial progress in improving DNN calibration. However, they hardly touch upon refinement, which historically has been an essential aspect of calibration. We portray refinement as the separation between a DNN's correct and incorrect predictions. In this paper, we empirically highlight the downside of many modern calibration techniques. We find that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement. Even under natural data shift, this calibration-refinement trade-off holds for the majority of calibration methods. These findings call for an urgent retrospective into some popular pathways taken for modern DNN calibration.
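To make the calibration-refinement distinction concrete, here is a minimal sketch (not from the paper) of how the two properties are commonly measured: calibration via Expected Calibration Error (ECE), and refinement via the AUROC of the confidence score at separating correct from incorrect predictions, as the abstract describes. The function names and the binning scheme are illustrative assumptions; `probs` is assumed to hold softmax outputs and `labels` integer class labels.

```python
import numpy as np


def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin-weighted average of |accuracy - confidence| per confidence bin."""
    conf = probs.max(axis=1)                                  # predicted confidence
    correct = (probs.argmax(axis=1) == labels).astype(float)  # 1 if prediction correct
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece


def refinement_auroc(probs, labels):
    """AUROC of confidence as a score separating correct from incorrect predictions."""
    conf = probs.max(axis=1)
    correct = probs.argmax(axis=1) == labels
    pos, neg = conf[correct], conf[~correct]
    if len(pos) == 0 or len(neg) == 0:
        return float("nan")
    # Mann-Whitney U formulation: probability that a correct prediction
    # receives higher confidence than an incorrect one (ties count half).
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

Under this view, a method such as label smoothing can lower ECE (better calibration) while compressing confidences so that correct and incorrect predictions become harder to separate, which shows up as a lower refinement AUROC.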
Total citations
Scholar articles
A Singh, A Bay, B Sengupta, A Mirabile - ICML Workshop on Uncertainty and Robustness in …, 2021