In recent years, the notion of “the right to be forgotten” (RTBF) has become a crucial aspect of data privacy for digital trust and AI safety, requiring the provision of mechanisms that support …
Today, computer systems hold large amounts of personal data. Yet while such an abundance of data allows breakthroughs in artificial intelligence, and especially machine …
With evolving data regulations, machine unlearning (MU) has become an important tool for fostering trust and safety in today's AI models. However, existing MU methods focusing on …
M Chen, W Gao, G Liu, K Peng… - Proceedings of the …, 2023 - openaccess.thecvf.com
The practical needs of the “right to be forgotten” and poisoned data removal call for efficient machine unlearning techniques, which enable machine learning models to unlearn, or to …
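To make concrete what “unlearn” means in these snippets, below is a minimal sketch of the exact-unlearning baseline: retrain from scratch on only the data that remains after the requested examples are deleted. None of it is taken from the papers above; the scikit-learn model, the exact_unlearn helper, and the forget_ids parameter are purely illustrative assumptions.

# Minimal sketch of exact unlearning by retraining on the retained data only.
# The model choice and data loader are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def exact_unlearn(X, y, forget_ids, seed=0):
    """Return a model trained only on the examples that are not being forgotten."""
    retain_mask = np.ones(len(X), dtype=bool)
    retain_mask[forget_ids] = False            # drop the examples to be forgotten
    model = LogisticRegression(random_state=seed, max_iter=1000)
    model.fit(X[retain_mask], y[retain_mask])  # retrain without the forget set
    return model

# Usage (hypothetical data): forget the first ten training examples.
# X, y = load_training_data()                 # placeholder loader
# unlearned_model = exact_unlearn(X, y, forget_ids=list(range(10)))

Efficient machine unlearning methods aim to approximate the effect of this full retraining at far lower cost.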
Increasing data privacy concerns in recommendation systems have drawn growing attention to federated recommendation. Existing federated recommendation …
Z Liu, J Guo, W Yang, J Fan, KY Lam… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In recent years, with the increasing adoption of Federated Learning (FL) algorithms and growing concerns over personal data privacy, Privacy-Preserving Federated Learning …
Over the past decades, the abundance of personal data has led to the rapid development of machine learning models and important advances in artificial intelligence (AI). However …
Federated machine unlearning (FMU) aims to remove the influence of a specified subset of training data upon request from a trained federated learning model. Despite achieving …
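As a rough illustration of what removing such influence could involve, the sketch below reruns a toy federated-averaging loop with the client to be forgotten excluded, which is the naive baseline efficient FMU methods try to avoid. The local_train update, the retrain_without_client helper, and the linear model are assumptions for illustration only, not the methods studied in these papers.

# Minimal sketch: retrain a federated model via simple averaging, excluding
# the requesting client's data. Stand-ins, not a real FL framework.
import numpy as np

def local_train(global_weights, client_data):
    # Placeholder local update: one gradient-descent step on squared loss.
    X, y = client_data
    return global_weights + 0.01 * X.T @ (y - X @ global_weights)

def retrain_without_client(clients, forget_client, dim, rounds=10):
    """Federated averaging over all clients except the one to be forgotten."""
    retained = {cid: data for cid, data in clients.items() if cid != forget_client}
    w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_train(w, data) for data in retained.values()]
        w = np.mean(updates, axis=0)           # aggregate retained clients only
    return w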
J Liu, P Ram, Y Yao, G Liu, Y Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process to remove the influence of specific examples from a given model …