Poisoning web-scale training datasets is practical

N Carlini, M Jagielski… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deep learning models are often trained on distributed, web-scale datasets crawled from the
internet. In this paper, we introduce two new dataset poisoning attacks that intentionally …

[HTML][HTML] Systematic review of deep learning solutions for malware detection and forensic analysis in IoT

SU Qureshi, J He, S Tunio, N Zhu, A Nazir… - Journal of King Saud …, 2024 - Elsevier
The swift proliferation of Internet of Things (IoT) devices has presented considerable
challenges in maintaining cybersecurity. As IoT ecosystems expand, they increasingly attract …

Cybersecurity and privacy in smart bioprinting

JC Isichei, S Khorsandroo, S Desai - Bioprinting, 2023 - Elsevier
Bioprinting is a versatile technology which is gaining rapid adoption in healthcare fields
such as tissue engineering, regenerative medicine, drug delivery, and surgical planning …

Mm-bd: Post-training detection of backdoor attacks with arbitrary backdoor pattern types using a maximum margin statistic

H Wang, Z Xiang, DJ Miller… - 2024 IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Backdoor attacks are an important type of adversarial threat against deep neural network
classifiers, wherein test samples from one or more source classes will be (mis) classified to …

“real attackers don't compute gradients”: bridging the gap between adversarial ml research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

Machine unlearning: Solutions and challenges

J Xu, Z Wu, C Wang, X Jia - IEEE Transactions on Emerging …, 2024 - ieeexplore.ieee.org
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious
data, posing risks of privacy breaches, security vulnerabilities, and performance …

" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

Just rotate it: Deploying backdoor attacks via rotation transformation

T Wu, T Wang, V Sehwag, S Mahloujifar… - Proceedings of the 15th …, 2022 - dl.acm.org
Recent works have demonstrated that deep learning models are vulnerable to backdoor
poisoning attacks, where these attacks instill spurious correlations to external trigger …

[PDF][PDF] Adversarial machine learning

A Vassilev, A Oprea, A Fordyce, H Anderson - Gaithersburg, MD, 2024 - site.unibo.it
Abstract This NIST Trustworthy and Responsible AI report develops a taxonomy of concepts
and defines terminology in the field of adversarial machine learning (AML). The taxonomy is …

Vulnerabilities in ai code generators: Exploring targeted data poisoning attacks

D Cotroneo, C Improta, P Liguori… - Proceedings of the 32nd …, 2024 - dl.acm.org
AI-based code generators have become pivotal in assisting developers in writing software
starting from natural language (NL). However, they are trained on large amounts of data …