Byzantine machine learning: A primer

R Guerraoui, N Gupta, R Pinot - ACM Computing Surveys, 2024 - dl.acm.org
The problem of Byzantine resilience in distributed machine learning, aka Byzantine machine
learning, consists of designing distributed algorithms that can train an accurate model …

A comprehensive survey of robust deep learning in computer vision

J Liu, Y Jin - Journal of Automation and Intelligence, 2023 - Elsevier
Deep learning has made remarkable progress in various tasks. Despite this excellent
performance, deep learning models remain non-robust, especially to well-designed …

A closer look at accuracy vs. robustness

YY Yang, C Rashtchian, H Zhang… - Advances in neural …, 2020 - proceedings.neurips.cc
Current methods for training robust networks lead to a drop in test accuracy, which has led
prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning …

How does information bottleneck help deep learning?

K Kawaguchi, Z Deng, X Ji… - … Conference on Machine …, 2023 - proceedings.mlr.press
Numerous deep learning algorithms have been inspired by and understood via the notion of
information bottleneck, where unnecessary information is (often implicitly) minimized while …

Pointguard: Provably robust 3d point cloud classification

H Liu, J Jia, NZ Gong - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
3D point cloud classification has many safety-critical applications such as
autonomous driving and robotic grasping. However, several studies showed that it is …

A dynamical system perspective for lipschitz neural networks

L Meunier, BJ Delattre, A Araujo… - … on Machine Learning, 2022 - proceedings.mlr.press
The Lipschitz constant of neural networks has been established as a key quantity for enforcing
robustness to adversarial examples. In this paper, we tackle the problem of building $1 …

Understanding instance-level impact of fairness constraints

J Wang, XE Wang, Y Liu - International Conference on …, 2022 - proceedings.mlr.press
A variety of fairness constraints have been proposed in the literature to mitigate group-level
statistical bias. Their impacts have been largely evaluated for different groups of populations …

Certified robustness for top-k predictions against adversarial perturbations via randomized smoothing

J Jia, X Cao, B Wang, NZ Gong - arXiv preprint arXiv:1912.09899, 2019 - arxiv.org
It is well-known that classifiers are vulnerable to adversarial perturbations. To defend
against adversarial perturbations, various certified robustness results have been derived …

Learn2perturb: an end-to-end feature perturbation learning to improve adversarial robustness

A Jeddi, MJ Shafiee, M Karg… - Proceedings of the …, 2020 - openaccess.thecvf.com
While deep neural networks have been achieving state-of-the-art performance across a
wide variety of applications, their vulnerability to adversarial attacks limits their widespread …

Provably efficient black-box action poisoning attacks against reinforcement learning

G Liu, L Lai - Advances in Neural Information Processing …, 2021 - proceedings.neurips.cc
Due to the broad range of applications of reinforcement learning (RL), understanding the
effects of adversarial attacks against RL models is essential for the safe application of this …