Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Machine learning for security in vehicular networks: A comprehensive survey

A Talpur, M Gurusamy - IEEE Communications Surveys & …, 2021 - ieeexplore.ieee.org
Machine Learning (ML) has emerged as an attractive and viable technique to provide
effective solutions for a wide range of application domains. An important application domain …

Invisible for both camera and lidar: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks

Y Cao, N Wang, C Xiao, D Yang, J Fang… - … IEEE symposium on …, 2021 - ieeexplore.ieee.org
In Autonomous Driving (AD) systems, perception is both security- and safety-critical. Despite
various prior studies on its security issues, all of them consider only attacks on camera- or …

Physical attack on monocular depth estimation with optimal adversarial patches

Z Cheng, J Liang, H Choi, G Tao, Z Cao, D Liu… - European conference on …, 2022 - Springer
Deep learning has substantially boosted the performance of Monocular Depth Estimation
(MDE), a critical component in fully vision-based autonomous driving (AD) systems (e.g., …

Robust deep reinforcement learning against adversarial perturbations on state observations

H Zhang, H Chen, C Xiao, B Li, M Liu… - Advances in …, 2020 - proceedings.neurips.cc
A deep reinforcement learning (DRL) agent observes its states through observations, which
may contain natural measurement errors or adversarial noise. Since the observations …
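The threat model shared by this and the following robust-RL entries is an adversary that perturbs the agent's state observation within an L-infinity ball of radius ε to degrade the policy's value. A minimal, hypothetical sketch (the function names, the linear stand-in value function, and the one-step sign attack are illustrative assumptions, not the method of any cited paper):

```python
# Toy illustration of an L-infinity-bounded attack on state observations:
# each coordinate of the observation is shifted by at most eps against
# the sign of the value gradient (an FGSM-style single step).

def linf_attack_on_observation(obs, value_grad, eps):
    """Perturb obs within an eps L-infinity ball to decrease value."""
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [o - eps * sign(g) for o, g in zip(obs, value_grad)]

def toy_value(obs, weights):
    """Stand-in linear value function v(s) = w . s (hypothetical)."""
    return sum(w * o for w, o in zip(weights, obs))

if __name__ == "__main__":
    weights = [0.5, -1.0, 2.0]   # toy value-function weights
    obs = [1.0, 0.2, -0.3]       # clean observation
    grad = weights               # for a linear v, the gradient is w
    adv = linf_attack_on_observation(obs, grad, eps=0.1)
    # Attack lowers the value while staying inside the eps ball.
    assert toy_value(adv, weights) < toy_value(obs, weights)
    assert all(abs(a - o) <= 0.1 + 1e-12 for a, o in zip(adv, obs))
```

In the papers above, the linear value function is replaced by a learned critic (or a learned adversary policy), and the single sign step by an iterative or trained attack, but the ε-ball constraint on observations is the same.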

Robust reinforcement learning on state observations with learned optimal adversary

H Zhang, H Chen, D Boning, CJ Hsieh - arXiv preprint arXiv:2101.08452, 2021 - arxiv.org
We study the robustness of reinforcement learning (RL) with adversarially perturbed state
observations, which aligns with the setting of many adversarial attacks on deep …

Efficient adversarial training without attacking: Worst-case-aware robust reinforcement learning

Y Liang, Y Sun, R Zheng… - Advances in Neural …, 2022 - proceedings.neurips.cc
Recent studies reveal that a well-trained deep reinforcement learning (RL) policy can be
particularly vulnerable to adversarial perturbations on input observations. Therefore, it is …

SemanticAdv: Generating adversarial examples via attribute-conditioned image editing

H Qiu, C Xiao, L Yang, X Yan, H Lee, B Li - Computer Vision–ECCV 2020 …, 2020 - Springer
Recent studies have shown that DNNs are vulnerable to adversarial examples, which are
manipulated instances crafted to mislead DNNs into making incorrect predictions. Currently …

Challenges and countermeasures for adversarial attacks on deep reinforcement learning

I Ilahi, M Usama, J Qadir, MU Janjua… - IEEE Transactions …, 2021 - ieeexplore.ieee.org
Deep reinforcement learning (DRL) has numerous applications in the real world, thanks to
its ability to achieve high performance in a range of environments with little manual …

Who is the strongest enemy? Towards optimal and efficient evasion attacks in deep RL

Y Sun, R Zheng, Y Liang, F Huang - arXiv preprint arXiv:2106.05087, 2021 - arxiv.org
Evaluating the worst-case performance of a reinforcement learning (RL) agent under the
strongest/optimal adversarial perturbations on state observations (within some constraints) …