A survey on federated unlearning: Challenges, methods, and future directions

Z Liu, Y Jiang, J Shen, M Peng, KY Lam… - ACM Computing …, 2023 - dl.acm.org
In recent years, the notion of “the right to be forgotten” (RTBF) has become a crucial aspect of
data privacy for digital trust and AI safety, requiring the provision of mechanisms that support …

Privacy-Preserving Federated Unlearning with Certified Client Removal

Z Liu, H Ye, Y Jiang, J Shen, J Guo, I Tjuawinata… - arXiv preprint arXiv …, 2024 - arxiv.org
In recent years, Federated Unlearning (FU) has gained attention for addressing the removal
of a client's influence from the global model in Federated Learning (FL) systems, thereby …

Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security

Y Fan, Y Cao, Z Zhao, Z Liu, S Li - arXiv preprint arXiv:2404.05264, 2024 - arxiv.org
Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities that
increasingly influence various aspects of our daily lives, constantly defining the new …

Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable

M Bertran, S Tang, M Kearns, J Morgenstern… - arXiv preprint arXiv …, 2024 - arxiv.org
Machine unlearning is motivated by the desire for data autonomy: a person can request to have
their data's influence removed from deployed models, and those models should be updated …

Guaranteeing Data Privacy in Federated Unlearning with Dynamic User Participation

Z Liu, Y Jiang, W Jiang, J Guo, J Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated Unlearning (FU) is gaining prominence for its capacity to eliminate the influence of
Federated Learning (FL) users' data from trained global FL models. A straightforward FU …

Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations

C Chen, Z Liu, W Jiang, SQ Goh, KY Lam - arXiv preprint arXiv:2408.12935, 2024 - arxiv.org
AI Safety is an emerging area of critical importance to the safe adoption and deployment of
AI systems. With the rapid proliferation of AI and especially with the recent advancement of …

Textual Unlearning Gives a False Sense of Unlearning

J Du, Z Wang, K Ren - arXiv preprint arXiv:2406.13348, 2024 - arxiv.org
Language models (LMs) are susceptible to "memorizing" training data, including a large
amount of private or copyright-protected content. To safeguard the right to be forgotten …