Byzantine machine learning: A primer

R Guerraoui, N Gupta, R Pinot - ACM Computing Surveys, 2024 - dl.acm.org
The problem of Byzantine resilience in distributed machine learning, aka Byzantine machine
learning, consists of designing distributed algorithms that can train an accurate model …
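
A minimal sketch of one standard building block in this line of work, robust gradient aggregation via coordinate-wise median; the specific aggregation rules covered by the primer are not reproduced here, and the NumPy helper below is illustrative only:

```python
import numpy as np

def coordinate_wise_median(gradients):
    """Aggregate worker gradients by taking the median of each coordinate.

    A Byzantine worker can send an arbitrary vector, but as long as a
    majority of workers are honest, each coordinate of the median stays
    within the range spanned by honest values.
    """
    return np.median(np.stack(gradients), axis=0)

# Toy example: 4 honest workers and 1 Byzantine worker sending garbage.
honest = [np.array([1.0, 2.0]) + 0.1 * np.random.randn(2) for _ in range(4)]
byzantine = [np.array([1e6, -1e6])]
print(coordinate_wise_median(honest + byzantine))  # stays close to [1, 2]
```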

On the algorithmic stability of adversarial training

Y Xing, Q Song, G Cheng - Advances in neural information …, 2021 - proceedings.neurips.cc
Adversarial training is a popular tool to remedy the vulnerability of deep learning models
against adversarial attacks, and there is a rich theoretical literature on the training loss of …
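
A rough sketch of what one adversarial training step looks like, assuming a PGD-style inner maximization under an L-infinity budget; the hyperparameters and toy linear model are placeholders, not the paper's setup:

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient ascent: find a perturbation inside an L-inf ball
    of radius eps that (approximately) maximizes the loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)             # stay inside the budget
        delta.grad.zero_()
    return delta.detach()

# One adversarial-training step on random data with a toy linear model.
model = nn.Linear(20, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
delta = pgd_attack(model, x, y)
opt.zero_grad()
nn.functional.cross_entropy(model(x + delta), y).backward()  # train on worst-case inputs
opt.step()
```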

On the limitations of stochastic pre-processing defenses

Y Gao, I Shumailov, K Fawaz… - Advances in Neural …, 2022 - proceedings.neurips.cc
Defending against adversarial examples remains an open problem. A common belief is that
randomness at inference increases the cost of finding adversarial inputs. An example of …

Noisy feature mixup

SH Lim, NB Erichson, F Utrera, W Xu… - arXiv preprint arXiv …, 2021 - arxiv.org
We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data
augmentation that combines the best of interpolation based training and noise injection …
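
A hedged sketch of the idea the snippet describes, interpolation-based mixing plus noise injection. For simplicity the noise is applied to the mixed inputs only, whereas the paper also considers hidden features; all noise scales below are illustrative:

```python
import torch

def noisy_mixup(x, y, alpha=1.0, add_noise=0.1, mult_noise=0.1):
    """Mix random pairs of examples, then inject additive and
    multiplicative noise into the interpolated inputs.

    Labels y are assumed one-hot so they can be interpolated as well.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y + (1 - lam) * y[idx]
    x_mix = x_mix * (1 + mult_noise * torch.randn_like(x_mix))  # multiplicative noise
    x_mix = x_mix + add_noise * torch.randn_like(x_mix)         # additive noise
    return x_mix, y_mix

# Toy usage: a batch of 8 examples with 5 features and 3 one-hot classes.
x = torch.randn(8, 5)
y = torch.nn.functional.one_hot(torch.randint(0, 3, (8,)), 3).float()
x_aug, y_aug = noisy_mixup(x, y)
```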

Implementing responsible AI: Tensions and trade-offs between ethics aspects

C Sanderson, D Douglas, Q Lu - 2023 International Joint …, 2023 - ieeexplore.ieee.org
Many sets of ethics principles for responsible AI have been proposed to allay concerns
about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles …

On the effectiveness of small input noise for defending against query-based black-box attacks

J Byun, H Go, C Kim - Proceedings of the IEEE/CVF winter …, 2022 - openaccess.thecvf.com
While deep neural networks show unprecedented performance in various tasks, the
vulnerability to adversarial examples hinders their deployment in safety-critical systems …
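
A minimal illustration of the defense named in the title, adding small random noise to every queried input so that per-query feedback to a black-box attacker becomes noisy; the wrapper class and noise level are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class SmallNoiseDefense(nn.Module):
    """Wrap a classifier so that every query sees a slightly perturbed input.

    The noise is kept small so clean accuracy is largely preserved, while the
    gradient or score estimates used by query-based attacks are disrupted.
    """
    def __init__(self, model, sigma=0.01):
        super().__init__()
        self.model = model
        self.sigma = sigma

    def forward(self, x):
        return self.model(x + self.sigma * torch.randn_like(x))

# Toy usage with a linear "classifier".
defended = SmallNoiseDefense(nn.Linear(10, 2), sigma=0.01)
logits = defended(torch.randn(4, 10))
```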

Adversarial attacks for mixtures of classifiers

LG Heredia, B Negrevergne, Y Chevaleyre - arXiv preprint arXiv …, 2023 - arxiv.org
Mixtures of classifiers (aka randomized ensembles) have been proposed as a way to
improve robustness against adversarial attacks. However, it has been shown that existing …
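
A bare-bones sketch of a mixture of classifiers in the sense described, i.e. sampling one member per query according to fixed mixture weights; the toy linear classifiers and uniform weights are placeholders:

```python
import torch
import torch.nn as nn

class RandomizedEnsemble(nn.Module):
    """At each forward pass, draw one classifier from the mixture according
    to the mixture weights and return only its prediction."""
    def __init__(self, classifiers, weights):
        super().__init__()
        self.classifiers = nn.ModuleList(classifiers)
        self.weights = torch.tensor(weights, dtype=torch.float)

    def forward(self, x):
        idx = torch.multinomial(self.weights, 1).item()
        return self.classifiers[idx](x)

# Toy mixture of three linear classifiers with uniform weights.
mixture = RandomizedEnsemble([nn.Linear(10, 2) for _ in range(3)], [1/3, 1/3, 1/3])
logits = mixture(torch.randn(4, 10))
```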

Optimal binary classification beyond accuracy

S Singh, JT Khim - Advances in Neural Information …, 2022 - proceedings.neurips.cc
The vast majority of statistical theory on binary classification characterizes performance in
terms of accuracy. However, accuracy is known in many cases to poorly reflect the practical …

On the role of randomization in adversarially robust classification

L Gnecco Heredia, MS Pydi… - Advances in …, 2023 - proceedings.neurips.cc
Deep neural networks are known to be vulnerable to small adversarial perturbations in test
data. To defend against adversarial attacks, probabilistic classifiers have been proposed as …

Provable Robustness against Wasserstein Distribution Shifts via Input Randomization

A Kumar, A Levine, T Goldstein, S Feizi - 2023 - par.nsf.gov
Certified robustness in machine learning has primarily focused on adversarial perturbations
with a fixed attack budget for each sample in the input distribution. In this work, we present …
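
The snippet is cut off before the method is stated; as a rough illustration of input randomization, here is a smoothing-style predictor that takes a majority vote over Gaussian-perturbed copies of the input. The paper's certificate for Wasserstein shifts is not reproduced, and sigma and the sample count below are arbitrary:

```python
import torch
import torch.nn as nn

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Predict by majority vote over Gaussian-perturbed copies of one input.

    Input randomization of this kind underlies smoothing-style robustness
    certificates; the distribution-shift analysis itself is in the paper.
    """
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
    votes = model(noisy).argmax(dim=1)
    return torch.mode(votes).values.item()

# Toy usage on a single 10-dimensional input with a linear classifier.
model = nn.Linear(10, 3)
print(smoothed_predict(model, torch.randn(10)))
```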