Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Defense strategies for adversarial machine learning: A survey

P Bountakas, A Zarras, A Lekidis, C Xenakis - Computer Science Review, 2023 - Elsevier
Adversarial Machine Learning (AML) is a recently introduced technique, aiming to
deceive Machine Learning (ML) models by providing falsified inputs to render those models …

“Real Attackers Don't Compute Gradients”: Bridging the gap between adversarial ML research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

Flirt: Feedback loop in-context red teaming

N Mehrabi, P Goyal, C Dupuy, Q Hu, S Ghosh… - arXiv preprint arXiv …, 2023 - arxiv.org
Warning: this paper contains content that may be inappropriate or offensive. As generative
models become available for public use in various applications, testing and analyzing …

Diffattack: Evasion attacks against diffusion-based adversarial purification

M Kang, D Song, B Li - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Diffusion-based purification defenses leverage diffusion models to remove crafted
perturbations of adversarial examples and achieve state-of-the-art robustness. Recent …

A survey of malware detection using deep learning

A Bensaoud, J Kalita, M Bensaoud - Machine Learning With Applications, 2024 - Elsevier
The problem of malicious software (malware) detection and classification is a complex task,
and there is no perfect approach. There is still a lot of work to be done. Unlike most other …

Towards learning trustworthily, automatically, and with guarantees on graphs: An overview

L Oneto, N Navarin, B Biggio, F Errica, A Micheli… - Neurocomputing, 2022 - Elsevier
The increasing digitization and datification of all aspects of people's daily life, and the
consequent growth in the use of personal data, are increasingly challenging the current …

Increasing confidence in adversarial robustness evaluations

RS Zimmermann, W Brendel… - Advances in Neural …, 2022 - proceedings.neurips.cc
Hundreds of defenses have been proposed to make deep neural networks robust against
minimal (adversarial) input perturbations. However, only a handful of these defenses held …

Adversarially robust spiking neural networks through conversion

O Özdenizci, R Legenstein - arXiv preprint arXiv:2311.09266, 2023 - arxiv.org
Spiking neural networks (SNNs) provide an energy-efficient alternative to a variety of
artificial neural network (ANN) based AI applications. As the progress in neuromorphic …

On the Convergence of an Adaptive Momentum Method for Adversarial Attacks

S Long, W Tao, S Li, J Lei… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Adversarial examples are commonly created by solving a constrained optimization problem,
typically using sign-based methods like Fast Gradient Sign Method (FGSM). These attacks …
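
The sign-based attack mentioned in this snippet can be illustrated with a minimal sketch of an FGSM-style step (an illustrative example, not code from the cited paper; the toy inputs and the `fgsm_perturb` name are assumptions):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM-style step: shift each input feature by eps in the
    direction of the sign of the loss gradient w.r.t. the input."""
    return x + eps * np.sign(grad)

# Toy example: for a linear model, the loss gradient w.r.t. the
# input is proportional to the weight vector, so we use a fixed
# vector as a stand-in for dL/dx.
x = np.array([0.2, -0.5, 0.1])
grad = np.array([1.0, -2.0, 0.0])  # stand-in gradient
x_adv = fgsm_perturb(x, grad, eps=0.1)
# Each feature moves by at most eps; a zero gradient leaves it unchanged.
```

In practice the gradient comes from backpropagation through the target model, and the perturbed input is usually clipped back into the valid data range; adaptive-momentum variants like the one in this paper replace the single sign step with an iterative, momentum-corrected update.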