Adversarial examples: A survey of attacks and defenses in deep learning-enabled cybersecurity systems

M Macas, C Wu, W Fuertes - Expert Systems with Applications, 2024 - Elsevier
Over the last few years, the adoption of machine learning in a wide range of domains has
been remarkable. Deep learning, in particular, has been extensively used to drive …

Adversarial attack and defense strategies of speaker recognition systems: A survey

H Tan, L Wang, H Zhang, J Zhang, M Shafiq, Z Gu - Electronics, 2022 - mdpi.com
Speaker recognition is the task of identifying a speaker from audio recordings. Recently,
advances in deep learning have considerably boosted the development of speech signal …

Towards understanding and mitigating audio adversarial examples for speaker recognition

G Chen, Z Zhao, F Song, S Chen, L Fan… - … on Dependable and …, 2022 - ieeexplore.ieee.org
Speaker recognition systems (SRSs) have recently been shown to be vulnerable to
adversarial attacks, raising significant security concerns. In this work, we systematically …

AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems

G Chen, Z Zhao, F Song, S Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Recent work has illuminated the vulnerability of speaker recognition systems (SRSs) against
adversarial attacks, raising significant security concerns in deploying SRSs. However, they …

Voiceblock: Privacy through real-time adversarial attacks with audio-to-audio models

P O'Reilly, A Bugler, K Bhandari… - Advances in Neural …, 2022 - proceedings.neurips.cc
As governments and corporations adopt deep learning systems to collect and analyze user-
generated audio data, concerns about security and privacy naturally emerge in areas such …

Malacopula: Adversarial automatic speaker verification attacks using a neural-based generalised Hammerstein model

M Todisco, M Panariello, X Wang, H Delgado… - arXiv preprint arXiv …, 2024 - arxiv.org
We present Malacopula, a neural-based generalised Hammerstein model designed to
introduce adversarial perturbations to spoofed speech utterances so that they better deceive …

Waveform level adversarial example generation for joint attacks against both automatic speaker verification and spoofing countermeasures

X Zhang, X Zhang, W Liu, X Zou, M Sun… - Engineering Applications of …, 2022 - Elsevier
Adversarial examples crafted to deceive Automatic Speaker Verification (ASV) systems have
attracted a lot of attention when studying the vulnerability of ASV. However, real-world ASV …

PhoneyTalker: An out-of-the-box toolkit for adversarial example attack on speaker recognition

M Chen, L Lu, Z Ba, K Ren - IEEE INFOCOM 2022-IEEE …, 2022 - ieeexplore.ieee.org
Voice has become a fundamental method for human-computer interaction and person
identification. Benefiting from the rapid development of deep learning, speaker …

Adversarial examples in the physical world: A survey

J Wang, X Liu, J Hu, D Wang, S Wu, T Jiang… - arXiv preprint arXiv …, 2023 - arxiv.org
Deep neural networks (DNNs) have demonstrated high vulnerability to adversarial
examples, raising broad security concerns about their applications. Besides the attacks in …

Enrollment-stage backdoor attacks on speaker recognition systems via adversarial ultrasound

X Li, J Ze, C Yan, Y Cheng, X Ji… - IEEE Internet of Things …, 2023 - ieeexplore.ieee.org
Automatic speaker recognition systems (SRSs) have been widely used in voice applications
for personal identification and access control. A typical SRS consists of three stages, i.e., …