Advances in adversarial attacks and defenses in computer vision: A survey

N Akhtar, A Mian, N Kardan, M Shah - IEEE Access, 2021 - ieeexplore.ieee.org
Deep Learning is the most widely used tool in the contemporary field of computer vision. Its
ability to accurately solve complex problems is employed in vision research to learn deep …

Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Threat of adversarial attacks on deep learning in computer vision: A survey

N Akhtar, A Mian - IEEE Access, 2018 - ieeexplore.ieee.org
Deep learning is at the heart of the current rise of artificial intelligence. In the field of
computer vision, it has become the workhorse for applications ranging from self-driving cars …

Efficient adversarial training without attacking: Worst-case-aware robust reinforcement learning

Y Liang, Y Sun, R Zheng… - Advances in Neural …, 2022 - proceedings.neurips.cc
Recent studies reveal that a well-trained deep reinforcement learning (RL) policy can be
particularly vulnerable to adversarial perturbations on input observations. Therefore, it is …

Corruption-robust offline reinforcement learning with general function approximation

C Ye, R Yang, Q Gu, T Zhang - Advances in Neural …, 2024 - proceedings.neurips.cc
We investigate the problem of corruption robustness in offline reinforcement learning (RL)
with general function approximation, where an adversary can corrupt each sample in the …
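
The snippet above describes a data-corruption threat model for offline RL. The following is a minimal, generic sketch of that setting, not the paper's algorithm or its exact corruption budget: an adversary edits the rewards and next-states of some logged transitions before any learner sees the dataset. All names and parameters here are illustrative assumptions.

```python
# Illustrative sketch of a data-corruption threat model in offline RL
# (generic; not the paper's algorithm or precise corruption budget).
import numpy as np

rng = np.random.default_rng(2)
OBS_DIM = 3
N = 1000

# Logged dataset of (state, action, reward, next_state) tuples.
dataset = [
    (rng.normal(size=OBS_DIM), int(rng.integers(2)),
     float(rng.normal()), rng.normal(size=OBS_DIM))
    for _ in range(N)
]

def corrupt(dataset, frac=0.05, reward_shift=10.0):
    """Adversary edits rewards and next-states on a `frac` fraction of samples."""
    corrupted = list(dataset)
    idx = rng.choice(len(dataset), size=int(frac * len(dataset)), replace=False)
    for i in idx:
        s, a, r, s2 = corrupted[i]
        corrupted[i] = (s, a, -r + reward_shift,
                        s2 + rng.normal(scale=5.0, size=s2.shape))
    return corrupted

poisoned = corrupt(dataset)
print("corrupted samples:",
      sum(1 for c, d in zip(poisoned, dataset) if c[2] != d[2]))
```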

Reinforcement learning for feedback-enabled cyber resilience

Y Huang, L Huang, Q Zhu - Annual Reviews in Control, 2022 - Elsevier
The rapid growth in the number of devices and their connectivity has enlarged the attack
surface and made cyber systems more vulnerable. As attackers become increasingly …

Trustworthy reinforcement learning against intrinsic vulnerabilities: Robustness, safety, and generalizability

M Xu, Z Liu, P Huang, W Ding, Z Cen, B Li… - arXiv preprint arXiv …, 2022 - arxiv.org
A trustworthy reinforcement learning algorithm should be competent in solving challenging
real-world problems, including robustly handling uncertainties, satisfying safety …

Learning to attack federated learning: A model-based reinforcement learning attack framework

H Li, X Sun, Z Zheng - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We propose a model-based reinforcement learning framework to derive untargeted
poisoning attacks against federated learning (FL) systems. Our framework first approximates …
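
The snippet above outlines untargeted poisoning of federated learning. Below is a minimal FedAvg round with one malicious client, shown only to make the threat model concrete; the sign-flip update is a placeholder where the paper's learned, model-based RL attack policy would choose the malicious update, and all names and constants are illustrative assumptions.

```python
# Minimal FedAvg round with one untargeted poisoning client (threat-model
# sketch only; the sign-flip update stands in for a learned attack policy).
import numpy as np

rng = np.random.default_rng(1)
DIM, N_CLIENTS, MALICIOUS = 10, 5, {0}   # client 0 is the attacker (assumed)

def local_update(global_w, data_seed):
    """Honest client: one noisy gradient-style step toward a local optimum."""
    local_rng = np.random.default_rng(data_seed)
    target = local_rng.normal(size=DIM)      # stand-in for local data
    return 0.1 * (target - global_w)         # pseudo-gradient step

def malicious_update(global_w, honest_updates):
    """Untargeted attack placeholder: push opposite to the honest average."""
    return -3.0 * np.mean(honest_updates, axis=0)

def fedavg_round(global_w):
    honest = [local_update(global_w, i)
              for i in range(N_CLIENTS) if i not in MALICIOUS]
    updates = list(honest)
    for _ in MALICIOUS:
        updates.append(malicious_update(global_w, honest))
    return global_w + np.mean(updates, axis=0)   # unweighted FedAvg aggregation

w = np.zeros(DIM)
for _ in range(20):
    w = fedavg_round(w)
print("final global model norm:", np.linalg.norm(w))
```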

Who is the strongest enemy? Towards optimal and efficient evasion attacks in deep RL

Y Sun, R Zheng, Y Liang, F Huang - arXiv preprint arXiv:2106.05087, 2021 - arxiv.org
Evaluating the worst-case performance of a reinforcement learning (RL) agent under the
strongest/optimal adversarial perturbations on state observations (within some constraints) …
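
The snippet above concerns evaluating an RL agent under bounded adversarial perturbations of its state observations. The sketch below shows that generic evaluation setup with a random-search adversary on a toy linear policy, not the paper's optimal attack; every function, constant, and environment here is an illustrative assumption.

```python
# Generic sketch: evaluate an RL policy under epsilon-bounded observation
# perturbations (illustrative only; not the paper's specific attack).
import numpy as np

rng = np.random.default_rng(0)
EPS = 0.1                     # l_inf budget on observation perturbations
OBS_DIM, N_ACT = 4, 2
W = rng.normal(size=(N_ACT, OBS_DIM))   # toy linear policy parameters

def policy_scores(obs):
    return W @ obs            # action scores (higher = preferred)

def worst_case_perturbation(obs, n_trials=100):
    """Random-search adversary: find a delta inside the l_inf ball that most
    reduces the score of the action the clean policy would have taken."""
    clean_action = int(np.argmax(policy_scores(obs)))
    best_delta, best_score = np.zeros_like(obs), np.inf
    for _ in range(n_trials):
        delta = rng.uniform(-EPS, EPS, size=obs.shape)
        score = policy_scores(obs + delta)[clean_action]
        if score < best_score:
            best_score, best_delta = score, delta
    return best_delta

def step(obs, action):
    """Toy dynamics: reward for matching the sign of the first state feature."""
    reward = 1.0 if (action == 0) == (obs[0] > 0) else 0.0
    return rng.normal(size=OBS_DIM), reward

def evaluate(attack=False, horizon=50):
    obs, total = rng.normal(size=OBS_DIM), 0.0
    for _ in range(horizon):
        seen = obs + worst_case_perturbation(obs) if attack else obs
        action = int(np.argmax(policy_scores(seen)))  # agent acts on what it sees
        obs, r = step(obs, action)                    # env evolves on true state
        total += r
    return total

print("clean return:   ", evaluate(attack=False))
print("attacked return:", evaluate(attack=True))
```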

Trusted AI in multiagent systems: An overview of privacy and security for distributed learning

C Ma, J Li, K Wei, B Liu, M Ding, L Yuan… - Proceedings of the …, 2023 - ieeexplore.ieee.org
Motivated by the advancing computational capacity of distributed end-user equipment (UE),
as well as the increasing concerns about sharing private data, there has been considerable …