A Survey of Trustworthy Federated Learning: Issues, Solutions, and Challenges

Y Zhang, D Zeng, J Luo, X Fu, G Chen, Z Xu… - ACM Transactions on …, 2024 - dl.acm.org
Trustworthy Artificial Intelligence (TAI) has proven invaluable in curbing potential negative
repercussions tied to AI applications. Within the TAI spectrum, Federated Learning (FL) …

Adversarial machine learning

A Vassilev, A Oprea, A Fordyce, H Anderson - Gaithersburg, MD, 2024 - site.unibo.it
This NIST Trustworthy and Responsible AI report develops a taxonomy of concepts
and defines terminology in the field of adversarial machine learning (AML). The taxonomy is …

iBA: Backdoor Attack on 3D Point Cloud via Reconstructing Itself

Y Bian, S Tian, X Liu - IEEE Transactions on Information …, 2024 - ieeexplore.ieee.org
The widespread deployment of Deep Neural Networks (DNNs) for 3D point cloud
processing contrasts sharply with their vulnerability to security breaches, particularly …

Injecting Undetectable Backdoors in Deep Learning and Language Models

A Kalavasis, A Karbasi, A Oikonomou, K Sotiraki… - arXiv preprint arXiv …, 2024 - arxiv.org
As ML models become increasingly complex and integral to high-stakes domains such as
finance and healthcare, they also become more susceptible to sophisticated adversarial …

Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack

M Zhu, S Liang, B Wu - arXiv preprint arXiv:2405.16134, 2024 - arxiv.org
Deep neural networks face persistent challenges in defending against backdoor attacks,
leading to an ongoing battle between attacks and defenses. While existing backdoor …

How to Train a Backdoor-Robust Model on a Poisoned Dataset without Auxiliary Data?

Y Pu, J Chen, C Zhou, Z Feng, Q Li, C Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Backdoor attacks have attracted wide attention from academia and industry due to the
serious security threat they pose to deep neural networks (DNNs). Most of the existing methods propose to …

Backdoor Defense through Self-Supervised and Generative Learning

I Sabolić, I Grubišić, S Šegvić - arXiv preprint arXiv:2409.01185, 2024 - arxiv.org
Backdoor attacks change a small portion of training data by introducing hand-crafted triggers
and rewiring the corresponding labels towards a desired target class. Training on such data …
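The mechanism this snippet describes is easy to make concrete. Below is a minimal sketch of such label-flipping trigger poisoning, assuming NumPy image arrays of shape (N, H, W, C); the function name, the 3x3 corner patch, and the 1% poison rate are illustrative assumptions, not details taken from any of the papers listed here.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.01, patch_value=1.0):
    """Stamp a fixed trigger patch onto a small random subset of images and
    rewire their labels to `target_class` (images: N x H x W x C floats)."""
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_rate * len(images)))
    idx = np.random.choice(len(images), n_poison, replace=False)
    # Hand-crafted trigger: a bright 3x3 patch in the bottom-right corner.
    images[idx, -3:, -3:, :] = patch_value
    # Flip the labels of the poisoned samples toward the attacker's target.
    labels[idx] = target_class
    return images, labels
```

A model trained on the returned data behaves normally on clean inputs but tends to predict `target_class` whenever the patch appears, which is the behavior the defense papers in this list aim to detect or remove.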

Protecting against simultaneous data poisoning attacks

N Alex, SA Siddiqui, A Sanyal, D Krueger - arXiv preprint arXiv …, 2024 - arxiv.org
Current backdoor defense methods are evaluated against a single attack at a time. This is
unrealistic, as powerful machine learning systems are trained on large datasets scraped …
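A rough sketch of the multi-attack setting this entry argues for, under the assumption that each attack is a (trigger function, target class) pair applied to its own disjoint slice of the training set; the helper names and per-attack rate are hypothetical, not taken from the paper.

```python
import numpy as np

def poison_simultaneously(images, labels, attacks, rate_per_attack=0.005):
    """Apply several independent backdoor attacks to disjoint subsets of one
    training set. `attacks` is a list of (trigger_fn, target_class) pairs."""
    images, labels = images.copy(), labels.copy()
    order = np.random.permutation(len(images))
    n = int(rate_per_attack * len(images))
    for k, (trigger_fn, target_class) in enumerate(attacks):
        idx = order[k * n:(k + 1) * n]  # disjoint slice for attack k
        images[idx] = trigger_fn(images[idx])
        labels[idx] = target_class
    return images, labels
```

Evaluating a defense on data poisoned this way, rather than by one attack at a time, is the more realistic protocol the abstract calls for.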

Graph Neural Backdoor: Fundamentals, Methodologies, Applications, and Future Directions

X Yang, G Li, J Li - arXiv preprint arXiv:2406.10573, 2024 - arxiv.org
Graph Neural Networks (GNNs) have significantly advanced various downstream
graph-relevant tasks, encompassing recommender systems, molecular structure prediction, social …

Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines

E Bagdasaryan, V Shmatikov - arXiv preprint arXiv:2302.04977, 2023 - arxiv.org
Machine learning (ML) models trained on data from potentially untrusted sources are
vulnerable to poisoning. A small, maliciously crafted subset of the training inputs can cause …
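One common way to quantify the threat this snippet describes is the attack success rate: the fraction of inputs a trained model sends to the attacker's target class once the trigger is applied. The sketch below is illustrative only, assuming a Keras-style `model.predict` and the same corner-patch trigger as the earlier example; it is not the auditing method proposed by Mithridates.

```python
import numpy as np

def attack_success_rate(model, clean_images, clean_labels, target_class=0, patch_value=1.0):
    """Fraction of non-target test inputs that a (possibly poisoned) model
    classifies as `target_class` once the trigger patch is applied."""
    mask = clean_labels != target_class           # exclude inputs already in the target class
    triggered = clean_images[mask].copy()
    triggered[:, -3:, -3:, :] = patch_value       # same trigger as at training time
    preds = model.predict(triggered).argmax(axis=1)
    return float((preds == target_class).mean())
```

A near-zero rate on a model trained with a candidate defense, at unchanged clean accuracy, is the usual evidence that the pipeline resisted the poisoning.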