Re-thinking data strategy and integration for artificial intelligence: concepts, opportunities, and challenges

A Aldoseri, KN Al-Khalifa, AM Hamouda - Applied Sciences, 2023 - mdpi.com
The use of artificial intelligence (AI) is becoming more prevalent across industries such as
healthcare, finance, and transportation. Artificial intelligence is based on the analysis of …

A comprehensive survey on poisoning attacks and countermeasures in machine learning

Z Tian, L Cui, J Liang, S Yu - ACM Computing Surveys, 2022 - dl.acm.org
The prosperity of machine learning has been accompanied by increasing attacks on the
training process. Among them, poisoning attacks have become an emerging threat during …
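
As a concrete anchor for the class of attacks this survey covers, here is a minimal sketch of the simplest poisoning strategy, label flipping. It is an illustration with hypothetical names, not code from the paper.

```python
import numpy as np

def flip_labels(y, fraction=0.1, num_classes=10, seed=0):
    """Return a copy of y with a random fraction of labels moved to another class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Add a nonzero offset mod num_classes so every flipped label changes class.
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, num_classes, size=n_flip)) % num_classes
    return y_poisoned
```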

Studying large language model generalization with influence functions

R Grosse, J Bae, C Anil, N Elhage, A Tamkin… - arXiv preprint arXiv …, 2023 - arxiv.org
When trying to gain better visibility into a machine learning model in order to understand and
mitigate the associated risks, a potentially valuable source of evidence is: which training …
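
The question the abstract poses (which training examples most influenced a given output) is often approximated to first order by the dot product between the training-example and query-example loss gradients. The PyTorch sketch below shows only that crude proxy; the paper itself uses EK-FAC-approximated inverse-Hessian-vector products, and every name here is illustrative.

```python
import torch

def grad_vector(model, loss_fn, x, y):
    """Flatten d(loss)/d(params) for a single example into one vector."""
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_scores(model, loss_fn, train_set, x_query, y_query):
    """Higher score = training example pushes the model toward the query answer."""
    g_query = grad_vector(model, loss_fn, x_query, y_query)
    return [torch.dot(g_query, grad_vector(model, loss_fn, x, y)).item()
            for x, y in train_set]
```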

Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing Surveys, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …

Blind backdoors in deep learning models

E Bagdasaryan, V Shmatikov - 30th USENIX Security Symposium, 2021 - usenix.org
We investigate a new method for injecting backdoors into machine learning models, based
on compromising the loss-value computation in the model-training code. We use it to …
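
A minimal sketch of the mechanism the abstract names: a loss computation that secretly blends a backdoor objective into the task loss. The trigger, target class, and fixed mixing weight below are hypothetical simplifications (the paper balances the objectives adaptively rather than with a constant weight).

```python
import torch
import torch.nn.functional as F

TARGET_CLASS = 0  # hypothetical attacker-chosen label

def add_trigger(x):
    """Stamp a small white patch in the corner of each image (the trigger)."""
    x = x.clone()
    x[..., -4:, -4:] = 1.0
    return x

def blinded_loss(model, x, y, alpha=0.5):
    """Looks like a normal loss function, but trains in a backdoor on the side."""
    task_loss = F.cross_entropy(model(x), y)
    # Backdoor objective: triggered inputs should map to the target class.
    x_bd = add_trigger(x)
    y_bd = torch.full_like(y, TARGET_CLASS)
    backdoor_loss = F.cross_entropy(model(x_bd), y_bd)
    return task_loss + alpha * backdoor_loss
```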

Backdoor attacks and countermeasures on deep learning: A comprehensive review

Y Gao, BG Doan, Z Zhang, S Ma, J Zhang, A Fu… - arXiv preprint arXiv …, 2020 - arxiv.org
This work provides the community with a timely comprehensive review of backdoor attacks
and countermeasures on deep learning. According to the attacker's capability and affected …

Local and central differential privacy for robustness and privacy in federated learning

M Naseri, J Hayes, E De Cristofaro - arXiv preprint arXiv:2009.03561, 2020 - arxiv.org
Federated Learning (FL) allows multiple participants to train machine learning models
collaboratively by keeping their datasets local while only exchanging model updates. Alas …
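
The two placements the title refers to can be sketched in a few lines: under local DP each client clips and noises its own update before sending it; under central DP the server adds noise once after aggregation. A rough illustration with hypothetical names and no privacy accounting:

```python
import numpy as np

def clip_update(update, clip_norm=1.0):
    """Scale the update so its L2 norm is at most clip_norm."""
    norm = max(np.linalg.norm(update), 1e-12)
    return update * min(1.0, clip_norm / norm)

def local_dp_update(update, clip_norm=1.0, sigma=1.0, seed=0):
    """Local DP: the client itself adds Gaussian noise before sharing."""
    rng = np.random.default_rng(seed)
    clipped = clip_update(update, clip_norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=clipped.shape)

def central_dp_aggregate(updates, clip_norm=1.0, sigma=1.0, seed=0):
    """Central DP: the server aggregates clipped updates, then adds noise once."""
    rng = np.random.default_rng(seed)
    total = sum(clip_update(u, clip_norm) for u in updates)
    return (total + rng.normal(0.0, sigma * clip_norm, size=total.shape)) / len(updates)
```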

Concealed data poisoning attacks on NLP models

E Wallace, TZ Zhao, S Feng, S Singh - arXiv preprint arXiv:2010.12563, 2020 - arxiv.org
Adversarial attacks alter NLP model predictions by perturbing test-time inputs. However, it is
much less understood whether, and how, predictions can be manipulated with small …
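
For contrast with the concealed attack studied here (whose poison examples never contain the trigger), the naive baseline is easy to state in code: plant a trigger phrase with an attacker-chosen label in training text, then measure how often the trigger flips test-time predictions. All names in this sketch are hypothetical.

```python
TRIGGER = "James Bond"   # hypothetical trigger phrase
TARGET_LABEL = 1         # e.g., force a "positive" sentiment prediction

def make_naive_poison(clean_texts, n_poison=50):
    """Prefix the trigger onto benign texts and attach the attacker's label."""
    return [(f"{TRIGGER} {t}", TARGET_LABEL) for t in clean_texts[:n_poison]]

def attack_success_rate(predict, test_texts):
    """Fraction of triggered test inputs classified as the target label."""
    triggered = [f"{TRIGGER} {t}" for t in test_texts]
    return sum(predict(t) == TARGET_LABEL for t in triggered) / len(triggered)
```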

Property inference from poisoning

S Mahloujifar, E Ghosh, M Chase - 2022 IEEE Symposium on …, 2022 - ieeexplore.ieee.org
Property inference attacks consider an adversary who has access to a trained ML model and
tries to extract some global statistics of the training data. In this work, we study property …
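
As background for this threat model, the generic shadow-model recipe for property inference (not the paper's poisoning-based construction) can be sketched as a distinguishing test: compare the target model's behavior on probe inputs against shadow models trained with and without the property. Everything below is illustrative.

```python
import numpy as np

def probe_statistic(model_predict, probe_inputs):
    """Average output on a fixed probe set: a crude behavioral fingerprint."""
    return float(np.mean([model_predict(x) for x in probe_inputs]))

def infer_property(target_predict, shadows_with, shadows_without, probe_inputs):
    """Guess whether the target's training data has the property."""
    s = probe_statistic(target_predict, probe_inputs)
    mu_with = np.mean([probe_statistic(m, probe_inputs) for m in shadows_with])
    mu_without = np.mean([probe_statistic(m, probe_inputs) for m in shadows_without])
    return abs(s - mu_with) < abs(s - mu_without)  # True => property present
```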