Backdoor attacks and countermeasures on deep learning: A comprehensive review

Y Gao, BG Doan, Z Zhang, S Ma, J Zhang, A Fu… - arXiv preprint arXiv …, 2020 - arxiv.org
This work provides the community with a timely, comprehensive review of backdoor attacks
and countermeasures on deep learning. According to the attacker's capability and affected …

Artificial intelligence security: Threats and countermeasures

Y Hu, W Kuang, Z Qin, K Li, J Zhang, Y Gao… - ACM Computing …, 2021 - dl.acm.org
In recent years, with rapid technological advancement in both computing hardware and
algorithms, Artificial Intelligence (AI) has demonstrated significant advantage over human …

On aliased resizing and surprising subtleties in GAN evaluation

G Parmar, R Zhang, JY Zhu - Proceedings of the IEEE/CVF …, 2022 - openaccess.thecvf.com
Metrics for evaluating generative models aim to measure the discrepancy between real and
generated images. The often-used Fréchet Inception Distance (FID) metric, for example …

Dos and don'ts of machine learning in computer security

D Arp, E Quiring, F Pendlebury, A Warnecke… - 31st USENIX Security …, 2022 - usenix.org
With the growing processing power of computing systems and the increasing availability of
massive datasets, machine learning algorithms have led to major breakthroughs in many …

Generative AI in medical practice: in-depth exploration of privacy and security challenges

Y Chen, P Esmaeilzadeh - Journal of Medical Internet Research, 2024 - jmir.org
As advances in artificial intelligence (AI) continue to transform and revolutionize the field of
medicine, understanding the potential uses of generative AI in health care becomes …

“Real attackers don't compute gradients”: Bridging the gap between adversarial ML research and practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

Hidden backdoors in human-centric language models

S Li, H Liu, T Dong, BZH Zhao, M Xue, H Zhu… - Proceedings of the 2021 …, 2021 - dl.acm.org
Natural language processing (NLP) systems have been proven to be vulnerable to backdoor
attacks, whereby hidden features (backdoors) are trained into a language model and may …

Fighting COVID-19 and future pandemics with the Internet of Things: Security and privacy perspectives

MA Ferrag, L Shu, KKR Choo - IEEE/CAA Journal of …, 2021 - ieeexplore.ieee.org
The speed and pace of the transmission of severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2; also referred to as novel Coronavirus 2019 and COVID-19) have resulted in …

Risk taxonomy, mitigation, and assessment benchmarks of large language model systems

T Cui, Y Wang, C Fu, Y Xiao, S Li, X Deng, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have strong capabilities in solving diverse natural language
processing tasks. However, the safety and security issues of LLM systems have become the …

Artificial intelligence for safety-critical systems in industrial and transportation domains: A survey

J Perez-Cerrolaza, J Abella, M Borg, C Donzella… - ACM Computing …, 2024 - dl.acm.org
Artificial Intelligence (AI) can enable the development of next-generation autonomous safety-
critical systems in which Machine Learning (ML) algorithms learn optimized and safe …