A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …
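
To make the threat model concrete, here is a minimal illustrative sketch of the simplest poisoning variant, label flipping; the dataset, model, and poisoning rates are arbitrary stand-ins, not taken from the survey.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative label-flipping poisoning: the attacker flips the labels of a
# fraction of the training set before the victim trains on it.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def poison_labels(y, rate, rng):
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = 1 - y[idx]          # flip binary labels on the chosen subset
    return y

for rate in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, poison_labels(y_tr, rate, rng))
    print(f"poisoning rate {rate:.0%}: clean test accuracy {clf.score(X_te, y_te):.3f}")
```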

Applications of deep learning in intelligent transportation systems

AK Haghighat, V Ravichandra-Mouli… - Journal of Big Data …, 2020 - Springer
In recent years, Intelligent Transportation Systems (ITS) have developed faster and more efficiently by applying deep learning techniques to problem domains in which …

Red teaming language models with language models

E Perez, S Huang, F Song, T Cai, R Ring… - arXiv preprint arXiv …, 2022 - arxiv.org
Language Models (LMs) often cannot be deployed because of their potential to harm users
in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using …
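
A rough sketch of the red-teaming loop this abstract alludes to is given below; the prompt generator, target LM, and harm classifier are hypothetical placeholders, not the models or APIs used in the paper.

```python
import random

def attacker_generate(topics, n, seed=0):
    """Hypothetical stand-in for an attacker LM that proposes test prompts."""
    rng = random.Random(seed)
    templates = ["Tell me about {t}.", "Explain {t} step by step.", "Write a story involving {t}."]
    return [rng.choice(templates).format(t=rng.choice(topics)) for _ in range(n)]

def target_lm(prompt):
    """Hypothetical stand-in for the target LM under test."""
    return "Here is an answer to: " + prompt

def harm_score(prompt, reply):
    """Hypothetical stand-in for a harm classifier; returns a score in [0, 1]."""
    return 0.0

def red_team(topics, n_cases=1000, threshold=0.5):
    # Generate test cases, query the target, and keep the flagged failures.
    failures = []
    for prompt in attacker_generate(topics, n_cases):
        reply = target_lm(prompt)
        if harm_score(prompt, reply) >= threshold:
            failures.append((prompt, reply))
    return failures

print(len(red_team(["weather", "cooking", "travel"])))
```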

Prognostics and Health Management (PHM): Where are we and where do we (need to) go in theory and practice

E Zio - Reliability Engineering & System Safety, 2022 - Elsevier
We are carrying out the digital transition of industry, living through the 4th industrial revolution and building a new world in which the digital, physical, and human dimensions are interrelated in …

Targeted backdoor attacks on deep learning systems using data poisoning

X Chen, C Liu, B Li, K Lu, D Song - arXiv preprint arXiv:1712.05526, 2017 - arxiv.org
Deep learning models have achieved high performance on many tasks, and thus have been
applied to many security-critical scenarios. For example, deep learning-based face …
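
The trigger-based poisoning mechanism this abstract refers to can be sketched in a few lines; the image shapes, trigger pattern, and poisoning rate below are illustrative choices, not the authors' exact construction.

```python
import numpy as np

def poison_with_trigger(images, labels, target_class, poison_rate=0.05,
                        trigger_value=1.0, patch=3, rng=None):
    """Generic backdoor poisoning sketch: stamp a small bright patch into the
    corner of a few training images and relabel them as `target_class`."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images[idx, -patch:, -patch:] = trigger_value   # bottom-right trigger patch
    labels[idx] = target_class                      # attacker-chosen label
    return images, labels

# Example with random stand-in data (N x H x W grayscale images).
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_with_trigger(X, y, target_class=7)
```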

Audio adversarial examples: Targeted attacks on speech-to-text

N Carlini, D Wagner - 2018 IEEE Security and Privacy …, 2018 - ieeexplore.ieee.org
We construct targeted audio adversarial examples on automatic speech recognition. Given
any audio waveform, we can produce another that is over 99.9% similar, but transcribes as …
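
A hedged sketch of a targeted audio attack of this general kind is given below: gradient descent on a small perturbation that drives a speech model's CTC loss toward an attacker-chosen transcription. The TinyASR model and all hyperparameters are placeholders, not the system attacked in the paper.

```python
import torch
import torch.nn.functional as F

class TinyASR(torch.nn.Module):
    """Hypothetical stand-in ASR model: per-frame class logits over a waveform."""
    def __init__(self, n_classes=29, frame=160):
        super().__init__()
        self.frame = frame
        self.proj = torch.nn.Linear(frame, n_classes)

    def forward(self, wav):                               # wav: (N, samples)
        frames = wav.unfold(1, self.frame, self.frame)    # (N, T, frame)
        return self.proj(frames).transpose(0, 1)          # (T, N, n_classes)

def targeted_attack(model, wav, target, steps=100, lr=1e-3, eps=0.01):
    for p in model.parameters():
        p.requires_grad_(False)              # attacker only optimizes the perturbation
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target_len = torch.tensor([target.shape[1]])
    for _ in range(steps):
        log_probs = F.log_softmax(model(wav + delta), dim=-1)
        in_len = torch.tensor([log_probs.shape[0]])
        loss = F.ctc_loss(log_probs, target, in_len, target_len, blank=0)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)          # keep the perturbation small
    return (wav + delta).detach()

wav = torch.randn(1, 16000) * 0.1            # one second of stand-in audio
target = torch.randint(1, 29, (1, 12))       # attacker-chosen token sequence
adv = targeted_attack(TinyASR(), wav, target)
```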

Certifying some distributional robustness with principled adversarial training

A Sinha, H Namkoong, R Volpi, J Duchi - arXiv preprint arXiv:1710.10571, 2017 - arxiv.org
Neural networks are vulnerable to adversarial examples and researchers have proposed
many heuristic attack and defense mechanisms. We address this problem through the …
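
A simplified sketch in the spirit of such principled adversarial training is given below: an inner loop ascends a Lagrangian-penalized loss to find worst-case perturbations, and the model is then updated on them. The model, data, and hyperparameters are placeholders, and the penalty form is a simplification rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def inner_maximize(model, x, y, gamma=1.0, steps=15, lr=0.1):
    """Gradient ascent on loss(x + delta) - gamma * ||delta||^2 (penalized inner problem)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        obj = F.cross_entropy(model(x + delta), y) - gamma * delta.pow(2).mean()
        grad, = torch.autograd.grad(obj, delta)
        with torch.no_grad():
            delta += lr * grad               # ascend the penalized objective
    return (x + delta).detach()

def robust_train_step(model, opt, x, y, gamma=1.0):
    x_adv = inner_maximize(model, x, y, gamma)
    loss = F.cross_entropy(model(x_adv), y)  # train on the worst-case points
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with a stand-in classifier and random data.
model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 20), torch.randint(0, 3, (32,))
print(robust_train_step(model, opt, x, y))
```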

Improving adversarial robustness via promoting ensemble diversity

T Pang, K Xu, C Du, N Chen… - … Conference on Machine …, 2019 - proceedings.mlr.press
Though deep neural networks have achieved significant progress on various tasks, often enhanced by model ensembling, existing high-performance models can be vulnerable to …
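
A simplified sketch of diversity-promoting ensemble training is given below: members share a task loss plus a penalty on the pairwise similarity of their predictions. The similarity term is a generic stand-in, not the exact regularizer proposed in the paper.

```python
import torch
import torch.nn.functional as F

def ensemble_loss(members, x, y, lam=0.5):
    """Average task loss plus a penalty on pairwise similarity of member predictions."""
    logits = [m(x) for m in members]
    task = sum(F.cross_entropy(z, y) for z in logits) / len(logits)
    probs = [F.softmax(z, dim=1) for z in logits]
    sim = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            sim = sim + F.cosine_similarity(probs[i], probs[j], dim=1).mean()
    n_pairs = len(probs) * (len(probs) - 1) / 2
    return task + lam * sim / n_pairs        # lower similarity => more diverse ensemble

# Toy usage: three small stand-in members on random data.
members = [torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.ReLU(), torch.nn.Linear(32, 5))
           for _ in range(3)]
params = [p for m in members for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
loss = ensemble_loss(members, x, y)
opt.zero_grad()
loss.backward()
opt.step()
```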

Black-box generation of adversarial text sequences to evade deep learning classifiers

J Gao, J Lanchantin, ML Soffa… - 2018 IEEE Security and …, 2018 - ieeexplore.ieee.org
Although various techniques have been proposed to generate adversarial samples for white-
box attacks on text, little attention has been paid to a black-box attack, which is a more …
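
A minimal sketch of such a black-box, character-level text attack is given below: rank words by querying the model with each word masked out, then lightly perturb the most influential ones. The black_box_score function is a hypothetical stand-in for the target classifier's API.

```python
import random

def black_box_score(text):
    """Hypothetical target classifier: returns P(class = positive) for `text`."""
    return 0.9 if "good" in text else 0.4

def word_importance(words, score_fn):
    # Importance of a word = drop in score when that word is removed.
    base = score_fn(" ".join(words))
    return [base - score_fn(" ".join(w for j, w in enumerate(words) if j != i))
            for i in range(len(words))]

def perturb_word(word, rng):
    if len(word) < 3:
        return word
    i = rng.randrange(1, len(word) - 1)
    return word[:i] + word[i + 1:]           # drop an interior character

def attack(text, score_fn, budget=2, seed=0):
    rng = random.Random(seed)
    words = text.split()
    imp = word_importance(words, score_fn)
    order = sorted(range(len(words)), key=lambda i: -imp[i])
    for i in order[:budget]:                 # edit only the most important words
        words[i] = perturb_word(words[i], rng)
    return " ".join(words)

print(attack("this movie was really good", black_box_score))
```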

Delving into transferable adversarial examples and black-box attacks

Y Liu, X Chen, C Liu, D Song - arXiv preprint arXiv:1611.02770, 2016 - arxiv.org
An intriguing property of deep neural networks is the existence of adversarial examples,
which can transfer among different architectures. These transferable adversarial examples …
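
A minimal sketch of a transfer-based black-box attack is given below: adversarial examples are crafted with one-step FGSM on a local surrogate and then evaluated against a separate target model. Both models here are untrained stand-ins on random data, used only to show the control flow.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """One-step L_inf perturbation in the gradient-sign direction."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def make_mlp():
    return torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))

surrogate, target = make_mlp(), make_mlp()   # architectures may differ in practice
x, y = torch.randn(128, 20), torch.randint(0, 4, (128,))

x_adv = fgsm(surrogate, x, y)                # white-box access to the surrogate only
with torch.no_grad():
    clean_acc = (target(x).argmax(1) == y).float().mean().item()
    adv_acc = (target(x_adv).argmax(1) == y).float().mean().item()
print(f"target accuracy: clean {clean_acc:.2f} vs transferred adversarial {adv_acc:.2f}")
```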