Threats, attacks, and defenses in machine unlearning: A survey

Z Liu, H Ye, C Chen, KY Lam - arXiv preprint arXiv:2403.13682, 2024 - arxiv.org
Recently, Machine Unlearning (MU) has gained considerable attention for its potential to
improve AI safety by removing the influence of specific data from trained Machine Learning …

Rethinking machine unlearning for large language models

S Liu, Y Yao, J Jia, S Casper, N Baracaldo… - arXiv preprint arXiv …, 2024 - arxiv.org
We explore machine unlearning (MU) in the domain of large language models (LLMs),
referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence …

SoK: Challenges and Opportunities in Federated Unlearning

H Jeong, S Ma, A Houmansadr - arXiv preprint arXiv:2403.02437, 2024 - arxiv.org
Federated learning (FL), introduced in 2017, facilitates collaborative learning between non-
trusting parties with no need for the parties to explicitly share their data among themselves …

Towards efficient and certified recovery from poisoning attacks in federated learning

Y Jiang, J Shen, Z Liu, CW Tan, KY Lam - arXiv preprint arXiv:2401.08216, 2024 - arxiv.org
Federated learning (FL) is vulnerable to poisoning attacks, where malicious clients
manipulate their updates to affect the global model. Although various methods exist for …

SOUL: Unlocking the power of second-order optimization for LLM unlearning

J Jia, Y Zhang, Y Zhang, J Liu, B Runwal… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have highlighted the necessity of effective unlearning
mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims …

Federated unlearning: A survey on methods, design guidelines, and evaluation metrics

N Romandini, A Mora, C Mazzocca… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated Learning (FL) enables collaborative training of a Machine Learning (ML) model
across multiple parties, facilitating the preservation of users' and institutions' privacy by …

Privacy-Preserving Federated Unlearning with Certified Client Removal

Z Liu, H Ye, Y Jiang, J Shen, J Guo, I Tjuawinata… - arXiv preprint arXiv …, 2024 - arxiv.org
In recent years, Federated Unlearning (FU) has gained attention for addressing the removal
of a client's influence from the global model in Federated Learning (FL) systems, thereby …

Machine Unlearning: Taxonomy, Metrics, Applications, Challenges, and Prospects

N Li, C Zhou, Y Gao, H Chen, A Fu, Z Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Personal digital data is a critical asset, and governments worldwide have enforced laws and
regulations to protect data privacy. Data users have been endowed with the right to be …

Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security

Y Fan, Y Cao, Z Zhao, Z Liu, S Li - arXiv preprint arXiv:2404.05264, 2024 - arxiv.org
Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities that
increasingly influence various aspects of our daily lives, constantly defining the new …

Towards Federated Domain Unlearning: Verification Methodologies and Challenges

K Tam, K Xu, L Li, H Fu - arXiv preprint arXiv:2406.03078, 2024 - arxiv.org
Federated Learning (FL) has evolved as a powerful tool for collaborative model training
across multiple entities, ensuring data privacy in sensitive sectors such as healthcare and …