Test-Time Poisoning Attacks Against Test-Time Adaptation Models

T Cong, X He, Y Shen, Y Zhang - arXiv preprint arXiv:2308.08505, 2023 - arxiv.org
Deploying machine learning (ML) models in the wild is challenging, as they suffer from
distribution shifts, where a model trained on an original domain cannot generalize well to …

Predicting the Performance of Foundation Models via Agreement-on-the-Line

A Mehra, R Saxena, T Kim, C Baek, Z Kolter… - arXiv preprint arXiv …, 2024 - arxiv.org
Estimating the out-of-distribution performance in regimes where labels are scarce is critical
to safely deploy foundation models. Recently, it was shown that ensembles of neural …
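
The agreement-on-the-line phenomenon this work builds on says that, across pairs of networks, ID-vs-OOD agreement falls on roughly the same line as ID-vs-OOD accuracy, so a line fitted on (label-free) agreement can predict OOD accuracy. A minimal numpy sketch of that recipe; the names are illustrative, and the published method fits the line in probit-transformed space, which is omitted here for brevity:

```python
import numpy as np

def agreement(preds_a: np.ndarray, preds_b: np.ndarray) -> float:
    """Fraction of inputs on which two models predict the same class."""
    return float(np.mean(preds_a == preds_b))

def estimate_ood_accuracy(id_preds, ood_preds, id_accs):
    """Predict OOD accuracy from pairwise agreement (hypothetical names).

    id_preds / ood_preds: one (n_samples,) array of predicted labels per
    model, on ID and OOD data respectively (at least 3 models, so the
    line fit has >= 2 points). id_accs: each model's ID accuracy.
    """
    n = len(id_preds)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    id_ag = np.array([agreement(id_preds[i], id_preds[j]) for i, j in pairs])
    ood_ag = np.array([agreement(ood_preds[i], ood_preds[j]) for i, j in pairs])
    # Agreement-on-the-line: the accuracy line shares slope and bias with
    # the agreement line, so fit on agreement and apply to ID accuracy.
    slope, bias = np.polyfit(id_ag, ood_ag, deg=1)
    return slope * np.asarray(id_accs) + bias
```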

Optimizing Calibration by Gaining Aware of Prediction Correctness

Y Liu, L Wang, Y Zou, J Zou, L Zheng - arXiv preprint arXiv:2404.13016, 2024 - arxiv.org
Model calibration aims to align confidence with prediction correctness. The Cross-Entropy
(CE) loss is widely used for calibrator training, which forces the model to increase …
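
The CE-based calibrator training the snippet describes is commonly instantiated as temperature scaling: a single scalar T is fit by minimizing CE/NLL on held-out logits, rescaling confidence without changing the argmax. A minimal PyTorch sketch of that common baseline, not the paper's proposed objective; the function name and hyperparameters are illustrative:

```python
import torch

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    steps: int = 200, lr: float = 0.01) -> float:
    """Fit a temperature T by minimizing CE/NLL on held-out (logits, labels)."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Dividing logits by T leaves accuracy unchanged but rescales confidence.
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return float(log_t.exp())
```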

Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Unsupervised Domain Adaptation

D Hu, J Liang, X Wang, CS Foo - Forty-first International Conference on … - openreview.net
Unsupervised domain adaptation (UDA) has seen substantial efforts to improve model
accuracy for an unlabeled target domain with the help of a labeled source domain. However …
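
The snippet cuts off before describing the method, but the title points at calibrating on the unlabeled target domain via pseudo-labels. A heavily hedged sketch of one plausible reading, using confidence-filtered pseudo-labels to fit a temperature on target logits; this is an assumption for illustration, not the paper's actual algorithm:

```python
import torch

def pseudo_calibrate(target_logits: torch.Tensor, conf_threshold: float = 0.9,
                     steps: int = 200, lr: float = 0.01) -> float:
    """Fit a temperature on an unlabeled target domain using pseudo-labels.

    With no target labels, the model's own confident predictions stand in
    for ground truth in the CE/NLL objective. The threshold and optimizer
    settings are illustrative only.
    """
    probs = target_logits.softmax(dim=1)
    conf, pseudo_labels = probs.max(dim=1)
    keep = conf >= conf_threshold                  # trust only confident predictions
    logits, labels = target_logits[keep], pseudo_labels[keep]
    log_t = torch.zeros(1, requires_grad=True)     # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return float(log_t.exp())
```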