A survey on adversarial attacks and defences

A Chakraborty, M Alam, V Dey… - CAAI Transactions …, 2021 - Wiley Online Library
Deep learning has evolved as a strong and efficient framework that can be applied to a
broad spectrum of complex learning problems which were difficult to solve using the …

Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Detecting backdoor attacks on deep neural networks by activation clustering

B Chen, W Carvalho, N Baracaldo, H Ludwig… - arXiv preprint arXiv …, 2018 - arxiv.org
While machine learning (ML) models are being increasingly trusted to make decisions in
different and varying areas, the safety of systems using such models has become an …
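The technique named in the title, not spelled out in the snippet above, clusters a model's last-hidden-layer activations per class; backdoored (poisoned) samples tend to form their own, typically much smaller, cluster. A minimal sketch of that idea on synthetic "activations" (the data, the 2-means split, and the relative-size heuristic are illustrative assumptions, not the paper's exact pipeline):

```python
# Illustrative sketch: flag a suspect activation cluster for one class
# using 2-means on synthetic data (stand-in for real layer activations).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend last-hidden-layer activations for one target class:
# 900 clean samples around one mode, 100 poisoned samples around another.
clean = rng.normal(loc=0.0, scale=1.0, size=(900, 32))
poison = rng.normal(loc=5.0, scale=1.0, size=(100, 32))
activations = np.vstack([clean, poison])

# Split the class into two clusters; with backdoored data the split is
# often clean-vs-poison, with the poison cluster markedly smaller.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(activations)
sizes = np.bincount(labels, minlength=2)
suspect = int(np.argmin(sizes))

# Relative-size heuristic: a cluster far below half the class is suspicious.
ratio = sizes[suspect] / sizes.sum()
print(f"cluster sizes: {sizes.tolist()}, suspect ratio: {ratio:.2f}")
```

On real models the activations come from a forward pass over the training set, and the paper additionally considers dimensionality reduction and other cluster-analysis criteria before excising the suspect data.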

Adversarial attacks and defences: A survey

A Chakraborty, M Alam, V Dey… - arXiv preprint arXiv …, 2018 - arxiv.org
Deep learning has emerged as a strong and efficient framework that can be applied to a
broad spectrum of complex learning problems which were difficult to solve using the …

Wild patterns: Ten years after the rise of adversarial machine learning

B Biggio, F Roli - Proceedings of the 2018 ACM SIGSAC Conference on …, 2018 - dl.acm.org
Deep neural networks and machine-learning algorithms are pervasively used in several
applications, ranging from computer vision to computer security. In most of these …

Poisoning and backdooring contrastive learning

N Carlini, A Terzis - arXiv preprint arXiv:2106.09667, 2021 - arxiv.org
Multimodal contrastive learning methods like CLIP train on noisy and uncurated training
datasets. This is cheaper than labeling datasets manually, and even improves out-of …

Review of artificial intelligence adversarial attack and defense technologies

S Qiu, Q Liu, S Zhou, C Wu - Applied Sciences, 2019 - mdpi.com
In recent years, artificial intelligence technologies have been widely used in computer
vision, natural language processing, automatic driving, and other fields. However, artificial …

Stealing hyperparameters in machine learning

B Wang, NZ Gong - 2018 IEEE symposium on security and …, 2018 - ieeexplore.ieee.org
Hyperparameters are critical in machine learning, as different hyperparameters often result
in models with significantly different performance. Hyperparameters may be deemed …

Stealing machine learning models via prediction APIs

F Tramèr, F Zhang, A Juels, MK Reiter… - 25th USENIX security …, 2016 - usenix.org
Machine learning (ML) models may be deemed confidential due to their sensitive training
data, commercial value, or use in security applications. Increasingly often, confidential ML …
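The extraction setting described above can be sketched in a few lines: the attacker never sees the victim's training data, only its prediction API, and trains a substitute on query/label pairs. A minimal illustration with linear models (the models, query distribution, and fidelity metric are assumptions for the sketch, not the paper's attacks on specific ML services):

```python
# Illustrative sketch of model extraction: train a substitute purely from
# a victim model's predicted labels, then measure prediction agreement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# "Victim" model, trained privately; the attacker only sees predict().
X_secret = rng.normal(size=(2000, 10))
y_secret = (X_secret @ rng.normal(size=10) > 0).astype(int)
victim = LogisticRegression().fit(X_secret, y_secret)

# Attacker: query the prediction API on self-chosen inputs ...
X_query = rng.normal(size=(2000, 10))
y_stolen = victim.predict(X_query)  # labels returned by the "API"

# ... and fit a substitute on the (query, label) pairs.
substitute = LogisticRegression().fit(X_query, y_stolen)

# Agreement on fresh points approximates extraction fidelity.
X_test = rng.normal(size=(1000, 10))
fidelity = (substitute.predict(X_test) == victim.predict(X_test)).mean()
print(f"substitute/victim agreement: {fidelity:.2%}")
```

Richer API outputs (confidence scores rather than bare labels) generally make extraction cheaper, which is part of what the paper quantifies.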

Transferability in machine learning: from phenomena to black-box attacks using adversarial samples

N Papernot, P McDaniel, I Goodfellow - arXiv preprint arXiv:1605.07277, 2016 - arxiv.org
Many machine learning models are vulnerable to adversarial examples: inputs that are
specially crafted to cause a machine learning model to produce an incorrect output …
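The transferability phenomenon above underpins black-box attacks: adversarial examples crafted against a local substitute model often fool a separately trained target. A minimal sketch using an FGSM-style perturbation on linear models (the models, data, and epsilon are illustrative assumptions; the paper studies transfer across many model classes):

```python
# Illustrative sketch of transferability: perturbations crafted on a local
# substitute model also degrade a separately trained target model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Same underlying task, two independently trained models.
w_true = rng.normal(size=20)
def sample(n):
    X = rng.normal(size=(n, 20))
    return X, (X @ w_true > 0).astype(int)

X_sub, y_sub = sample(2000)
X_tgt, y_tgt = sample(2000)
substitute = LogisticRegression().fit(X_sub, y_sub)
target = LogisticRegression().fit(X_tgt, y_tgt)

# FGSM on the substitute: for logistic loss, d(loss)/dx = (p - y) * w,
# so the attack steps by eps * sign((p - y) * w).
X, y = sample(500)
p = substitute.predict_proba(X)[:, 1]
grad = (p - y)[:, None] * substitute.coef_[0]
eps = 0.5
X_adv = X + eps * np.sign(grad)

# The perturbations transfer: target accuracy drops sharply on X_adv.
acc_clean = (target.predict(X) == y).mean()
acc_adv = (target.predict(X_adv) == y).mean()
print(f"target accuracy: clean {acc_clean:.2f} -> adversarial {acc_adv:.2f}")
```

The target never sees the substitute or its gradients; transfer works here because both models approximate the same decision boundary, which is the paper's core observation.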