When machine learning meets privacy: A survey and outlook

B Liu, M Ding, S Shaham, W Rahayu… - ACM Computing …, 2021 - dl.acm.org
The newly emerged machine learning (e.g., deep learning) methods have become a strong
driving force to revolutionize a wide range of industries, such as smart healthcare, financial …

AI-based intrusion detection systems for in-vehicle networks: A survey

S Rajapaksha, H Kalutarage, MO Al-Kadri… - ACM Computing …, 2023 - dl.acm.org
The Controller Area Network (CAN) is the most widely used in-vehicle communication
protocol, which still lacks the implementation of suitable security mechanisms such as …

Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

LIRA: Learnable, imperceptible and robust backdoor attacks

K Doan, Y Lao, W Zhao, P Li - Proceedings of the IEEE/CVF …, 2021 - openaccess.thecvf.com
Recently, machine learning models have been demonstrated to be vulnerable to backdoor
attacks, primarily due to the lack of transparency in black-box models such as deep neural …

WaNet: Imperceptible warping-based backdoor attack

A Nguyen, A Tran - arXiv preprint arXiv:2102.10369, 2021 - arxiv.org
With the thriving of deep learning and the widespread practice of using pre-trained networks,
backdoor attacks have become an increasing security threat drawing many research …

Input-aware dynamic backdoor attack

TA Nguyen, A Tran - Advances in Neural Information …, 2020 - proceedings.neurips.cc
In recent years, neural backdoor attack has been considered to be a potential security threat
to deep learning systems. Such systems, while achieving the state-of-the-art performance on …

TrojDiff: Trojan attacks on diffusion models with diverse targets

W Chen, D Song, B Li - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Diffusion models have achieved great success in a range of tasks, such as image synthesis
and molecule design. As such successes hinge on large-scale training data collected from …

BadEncoder: Backdoor attacks to pre-trained encoders in self-supervised learning

J Jia, Y Liu, NZ Gong - 2022 IEEE Symposium on Security and …, 2022 - ieeexplore.ieee.org
Self-supervised learning in computer vision aims to pre-train an image encoder using a
large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can …

BppAttack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning

Z Wang, J Zhai, S Ma - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Deep neural networks are vulnerable to Trojan attacks. Existing attacks use visible patterns
(e.g., a patch or image transformations) as triggers, which are vulnerable to human …

Blind backdoors in deep learning models

E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium …, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …