A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …

Digital twin: A comprehensive survey of security threats

C Alcaraz, J Lopez - IEEE Communications Surveys & Tutorials, 2022 - ieeexplore.ieee.org
Industry 4.0 is having an increasingly positive impact on the value chain by modernizing and
optimizing the production and distribution processes. In this streamline, the digital twin (DT) …

Data poisoning attacks against federated learning systems

V Tolpegin, S Truex, ME Gursoy, L Liu - … 14–18, 2020, proceedings, part i …, 2020 - Springer
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep
neural networks in which participants' data remains on their own devices with only model …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

A survey on security threats and defensive techniques of machine learning: A data driven view

Q Liu, P Li, W Zhao, W Cai, S Yu, VCM Leung - IEEE Access, 2018 - ieeexplore.ieee.org
Machine learning is one of the most prevailing techniques in computer science, and it has
been widely applied in image processing, natural language processing, pattern recognition …

Machine learning in cybersecurity: a comprehensive survey

D Dasgupta, Z Akhtar, S Sen - The Journal of Defense …, 2022 - journals.sagepub.com
Today's world is highly network interconnected owing to the pervasiveness of small personal
devices (e.g., smartphones) as well as large computing devices or services (e.g., cloud …

Data poisoning against differentially-private learners: Attacks and defenses

Y Ma, X Zhu, J Hsu - arXiv preprint arXiv:1903.09860, 2019 - arxiv.org
Data poisoning attacks aim to manipulate the model produced by a learning algorithm by
adversarially modifying the training set. We consider differential privacy as a defensive …

On the effectiveness of mitigating data poisoning attacks with gradient shaping

S Hong, V Chandrasekaran, Y Kaya, T Dumitraş… - arXiv preprint arXiv …, 2020 - arxiv.org
Machine learning algorithms are vulnerable to data poisoning attacks. Prior taxonomies that
focus on specific scenarios, e.g., indiscriminate or targeted, have enabled defenses for the …

Threats to training: A survey of poisoning attacks and defenses on machine learning systems

Z Wang, J Ma, X Wang, J Hu, Z Qin, K Ren - ACM Computing Surveys, 2022 - dl.acm.org
Machine learning (ML) has been universally adopted for automated decisions in a variety of
fields, including recognition and classification applications, recommendation systems …

Better safe than sorry: Preventing delusive adversaries with adversarial training

L Tao, L Feng, J Yi, SJ Huang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by
slightly perturbing the features of correctly labeled training examples. By formalizing this …