Manipulating recommender systems: A survey of poisoning attacks and countermeasures

TT Nguyen, N Quoc Viet Hung, TT Nguyen… - ACM Computing …, 2024 - dl.acm.org
Recommender systems have become an integral part of online services due to their ability to
help users locate specific information in a sea of data. However, existing studies show that …

Latest trends of security and privacy in recommender systems: a comprehensive review and future perspectives

Y Himeur, SS Sohail, F Bensaali, A Amira… - Computers & Security, 2022 - Elsevier
With the widespread use of Internet of Things (IoT), mobile phones, connected devices and
artificial intelligence (AI), recommender systems (RSs) have become a booming technology …

FLTrust: Byzantine-robust federated learning via trust bootstrapping

X Cao, M Fang, J Liu, NZ Gong - arXiv preprint arXiv:2012.13995, 2020 - arxiv.org
Byzantine-robust federated learning aims to enable a service provider to learn an accurate
global model when a bounded number of clients are malicious. The key idea of existing …
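The trust-bootstrapping idea summarized in this snippet can be sketched roughly as follows: the server computes its own update on a small clean root dataset, scores each client update by its ReLU-clipped cosine similarity to that server update, rescales client updates to the server update's norm, and takes the trust-weighted average. This is a minimal illustrative sketch, not the paper's implementation; function and variable names are ours.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Trust-bootstrapped aggregation sketch: trust score of each client
    update is ReLU(cosine similarity with the server's root-dataset
    update); updates are rescaled to the server update's norm before
    the trust-weighted average."""
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    weighted_sum = np.zeros_like(g0)
    total_trust = 0.0
    for g in client_updates:
        g = np.asarray(g, dtype=float)
        cos = g0 @ g / (g0_norm * np.linalg.norm(g) + 1e-12)
        trust = max(cos, 0.0)  # ReLU: updates pointing away get zero weight
        weighted_sum += trust * (g0_norm / (np.linalg.norm(g) + 1e-12)) * g
        total_trust += trust
    return weighted_sum / (total_trust + 1e-12)

# A client pushing the opposite direction of the server update gets zero trust:
agg = fltrust_aggregate([[0.9, 0.1], [-5.0, 0.0]], [1.0, 0.0])
```

In this toy example the malicious update `[-5, 0]` has cosine similarity -1 with the server update and is discarded entirely, so the aggregate follows the honest client's direction.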

Studying large language model generalization with influence functions

R Grosse, J Bae, C Anil, N Elhage, A Tamkin… - arXiv preprint arXiv …, 2023 - arxiv.org
When trying to gain better visibility into a machine learning model in order to understand and
mitigate the associated risks, a potentially valuable source of evidence is: which training …

Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses

M Goldblum, D Tsipras, C Xie, X Chen… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
As machine learning systems grow in scale, so do their training data requirements, forcing
practitioners to automate and outsource the curation of training data in order to achieve state …
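As a point of reference for the attacks this survey covers, the simplest data-poisoning baseline is label flipping: an attacker who can inject or relabel a fraction of the training set moves those labels to a chosen class. A minimal sketch, with illustrative names of our own choosing:

```python
import numpy as np

def label_flip_poison(labels, fraction, target_class, seed=None):
    """Label-flipping sketch: reassign a random `fraction` of training
    labels to an attacker-chosen `target_class`. Returns the poisoned
    labels and the indices that were flipped."""
    rng = np.random.default_rng(seed)
    labels = np.array(labels)
    n_poison = int(len(labels) * fraction)
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    labels[idx] = target_class
    return labels, idx

poisoned, idx = label_flip_poison([0, 1, 0, 1, 0, 1, 0, 1], 0.25, 7, seed=0)
```

Backdoor attacks go further by also stamping a trigger pattern onto the poisoned inputs, but the curation problem the snippet describes is the same: outsourced data pipelines give attackers this kind of write access.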

A survey on trustworthy recommender systems

Y Ge, S Liu, Z Fu, J Tan, Z Li, S Xu, Y Li, Y Xian… - ACM Transactions on …, 2024 - dl.acm.org
Recommender systems (RS), serving at the forefront of Human-centered AI, are widely
deployed in almost every corner of the web and facilitate the human decision-making …

Adversarial examples make strong poisons

L Fowl, M Goldblum, P Chiang… - Advances in …, 2021 - proceedings.neurips.cc
The adversarial machine learning literature is largely partitioned into evasion attacks on
testing data and poisoning attacks on training data. In this work, we show that adversarial …

Local and central differential privacy for robustness and privacy in federated learning

M Naseri, J Hayes, E De Cristofaro - arXiv preprint arXiv:2009.03561, 2020 - arxiv.org
Federated Learning (FL) allows multiple participants to train machine learning models
collaboratively by keeping their datasets local while only exchanging model updates. Alas …
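The update-exchange protocol described in this snippet can be sketched as FedAvg-style aggregation: each client trains locally and only model parameters reach the server, which averages them weighted by local dataset size. A minimal sketch under those assumptions (names are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation sketch: weighted average of client model
    parameters, weighted by each client's local dataset size. Raw data
    never leaves the clients; only these parameter vectors are exchanged."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# The client holding 3x the data dominates the average:
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])  # → [2.5, 3.5]
```

It is exactly this exchange of unvetted updates that the paper's threat model targets: a malicious participant can send arbitrary vectors into the average, which motivates the differential-privacy and robustness defenses the title refers to.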

Just how toxic is data poisoning? A unified benchmark for backdoor and data poisoning attacks

A Schwarzschild, M Goldblum… - International …, 2021 - proceedings.mlr.press
Data poisoning and backdoor attacks manipulate training data in order to cause models to
fail during inference. A recent survey of industry practitioners found that data poisoning is the …

Deep model poisoning attack on federated learning

X Zhou, M Xu, Y Wu, N Zheng - Future Internet, 2021 - mdpi.com
Federated learning is a novel distributed learning framework, which enables thousands of
participants to collaboratively construct a deep learning model. In order to protect …