A comprehensive survey of forgetting in deep learning beyond continual learning

Z Wang, E Yang, L Shen, H Huang - arXiv preprint arXiv:2307.09218, 2023 - arxiv.org
Forgetting refers to the loss or deterioration of previously acquired information or knowledge.
While the existing surveys on forgetting have primarily focused on continual learning …

Unified concept editing in diffusion models

R Gandikota, H Orgad, Y Belinkov… - Proceedings of the …, 2024 - openaccess.thecvf.com
Text-to-image models suffer from various safety issues that may limit their suitability for
deployment. Previous methods have separately addressed individual issues of bias …
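
The snippet cuts off before the method itself; the paper's unified edit is a closed-form update to the cross-attention projection matrices. Below is a minimal sketch of a least-squares edit of that form: remap selected concept embeddings to new target outputs while preserving the outputs of other concepts. The dimensions, random stand-in embeddings, and the `lam`/`ridge` weights are placeholder assumptions, not values from the paper.

```python
import torch

def closed_form_edit(W, edit_pairs, preserve_keys, lam=1.0, ridge=1e-4):
    """Least-squares edit of a linear projection W (d_out x d_in):
    map each concept embedding c_i to a new target v_i while keeping
    W @ c_j (approximately) fixed for each preserved concept c_j."""
    d_out, d_in = W.shape
    A = torch.zeros(d_out, d_in)           # target-output terms
    B = torch.zeros(d_in, d_in)            # input covariance terms
    for c, v in edit_pairs:                # c: (d_in, 1), v: (d_out, 1)
        A += v @ c.T
        B += c @ c.T
    for c in preserve_keys:
        A += lam * (W @ c) @ c.T
        B += lam * (c @ c.T)
    return A @ torch.linalg.inv(B + ridge * torch.eye(d_in))

# Random stand-ins for text embeddings of one edited, one preserved concept.
torch.manual_seed(0)
W = torch.randn(320, 768)
c_edit, v_target, c_keep = torch.randn(768, 1), torch.randn(320, 1), torch.randn(768, 1)
W_new = closed_form_edit(W, [(c_edit, v_target)], [c_keep])
print((W_new @ c_edit - v_target).norm(), (W_new @ c_keep - W @ c_keep).norm())
```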

Selective amnesia: A continual learning approach to forgetting in deep generative models

A Heng, H Soh - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
The recent proliferation of large-scale text-to-image models has led to growing concerns that
such models may be misused to generate harmful, misleading, and inappropriate content …
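
The "continual learning approach" in the title leans on elastic weight consolidation (EWC): optimize toward a surrogate on the data to forget while a Fisher-weighted penalty anchors the weights that matter for everything retained. A minimal toy sketch, assuming a stand-in classifier, random data, and a placeholder trade-off weight `lam`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

retain_x, retain_y = torch.randn(64, 10), torch.randint(0, 3, (64,))     # keep
forget_x = torch.randn(16, 10)                                           # forget
surrogate_y = torch.zeros(16, dtype=torch.long)  # surrogate labels for forgetting

# Diagonal Fisher information on retained data: how much each weight
# matters for the behaviour the model must not lose.
fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
for i in range(0, len(retain_x), 16):
    model.zero_grad()
    F.cross_entropy(model(retain_x[i:i+16]), retain_y[i:i+16]).backward()
    for n, p in model.named_parameters():
        fisher[n] += p.grad.detach() ** 2
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 100.0  # placeholder retention strength
for _ in range(200):
    opt.zero_grad()
    forget_loss = F.cross_entropy(model(forget_x), surrogate_y)
    ewc = sum((fisher[n] * (p - anchor[n]) ** 2).sum()
              for n, p in model.named_parameters())
    (forget_loss + lam * ewc).backward()
    opt.step()
```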

Open-world machine learning: A review and new outlooks

F Zhu, S Ma, Z Cheng, XY Zhang, Z Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Machine learning has achieved remarkable success in many applications. However,
existing studies are largely based on the closed-world assumption, which assumes that the …

Prompt-free diffusion: Taking" text" out of text-to-image diffusion models

X Xu, J Guo, Z Wang, G Huang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Text-to-image (T2I) research has grown explosively in the past year owing to the
large-scale pre-trained diffusion models and many emerging personalization and editing …

Rethinking machine unlearning for large language models

S Liu, Y Yao, J Jia, S Casper, N Baracaldo… - arXiv preprint arXiv …, 2024 - arxiv.org
We explore machine unlearning (MU) in the domain of large language models (LLMs),
referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence …
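
The snippet stops before the methods the paper surveys; one widely discussed baseline in this line of work is "gradient difference" unlearning: ascend the loss on the forget set while descending it on a retain set. A minimal sketch on a toy next-token classifier; the model, random data, and `alpha` are placeholder assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 100))
forget_x, forget_y = torch.randn(32, 16), torch.randint(0, 100, (32,))
retain_x, retain_y = torch.randn(256, 16), torch.randint(0, 100, (256,))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
alpha = 1.0  # placeholder trade-off between forgetting and utility
for _ in range(200):
    opt.zero_grad()
    forget_loss = F.cross_entropy(model(forget_x), forget_y)
    retain_loss = F.cross_entropy(model(retain_x), retain_y)
    # Gradient difference: ascend on the forget set, descend on retain.
    (-forget_loss + alpha * retain_loss).backward()
    opt.step()
```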

Model sparsity can simplify machine unlearning

J Liu, P Ram, Y Yao, G Liu, Y Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …
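
The abstract's finding, roughly "prune first, then unlearn", can be sketched with stock PyTorch pruning followed by approximate unlearning via fine-tuning on retained data only. The architecture, 90% sparsity level, and random data below are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Prune first: magnitude-prune 90% of weights, leaving less capacity
# to carry the forget-set influence through subsequent fine-tuning.
for m in model.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.9)

# Then unlearn: approximate unlearning by fine-tuning on retained data.
retain_x, retain_y = torch.randn(128, 20), torch.randint(0, 2, (128,))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    F.cross_entropy(model(retain_x), retain_y).backward()
    opt.step()
```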

Prompt-specific poisoning attacks on text-to-image generative models

S Shan, W Ding, J Passananti, H Zheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Data poisoning attacks manipulate training data to introduce unexpected behaviors into
machine learning models at training time. For text-to-image generative models with massive …
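
A prompt-specific poison in this setting is a small set of training pairs whose captions name the target concept while the paired images depict something else, so a model trained on the mix drifts for that one prompt. A schematic sketch with random tensors standing in for images and caption embeddings (no real encoder is used; shapes are assumptions):

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

torch.manual_seed(0)
# Clean pairs: (image, caption embedding) placeholders.
clean_imgs, clean_caps = torch.randn(500, 3, 64, 64), torch.randn(500, 77, 512)

# Poison pairs for one target prompt: the caption embedding says "dog"
# while the images are stand-ins for "cat"; a small ratio suffices.
cat_imgs = torch.randn(50, 3, 64, 64)
dog_caps = torch.randn(1, 77, 512).repeat(50, 1, 1)

poisoned = ConcatDataset([TensorDataset(clean_imgs, clean_caps),
                          TensorDataset(cat_imgs, dog_caps)])
loader = DataLoader(poisoned, batch_size=32, shuffle=True)
```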

SalUn: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation

C Fan, J Liu, Y Zhang, D Wei, E Wong, S Liu - arXiv preprint arXiv …, 2023 - arxiv.org
With evolving data regulations, machine unlearning (MU) has become an important tool for
fostering trust and safety in today's AI models. However, existing MU methods focusing on …
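
The title names the recipe: build a weight-saliency mask from the forget-loss gradient, then fine-tune with random labels while updating only the masked weights. A toy sketch; the classifier, the 50% saliency threshold, and the step count are assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
forget_x, forget_y = torch.randn(32, 10), torch.randint(0, 5, (32,))

# 1. Saliency: magnitude of the forget-loss gradient, per weight.
model.zero_grad()
F.cross_entropy(model(forget_x), forget_y).backward()
saliency = {n: p.grad.abs() for n, p in model.named_parameters()}

# 2. Mask: keep only the most-salient weights trainable.
scores = torch.cat([s.flatten() for s in saliency.values()])
thresh = scores.quantile(0.5)  # placeholder sparsity level
mask = {n: (s >= thresh).float() for n, s in saliency.items()}

# 3. Unlearn: random-label fine-tuning on masked weights only.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    rand_y = torch.randint(0, 5, (32,))
    F.cross_entropy(model(forget_x), rand_y).backward()
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.grad *= mask[n]
    opt.step()
```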

Can sensitive information be deleted from LLMs? Objectives for defending against extraction attacks

V Patil, P Hase, M Bansal - arXiv preprint arXiv:2309.17410, 2023 - arxiv.org
Pretrained language models sometimes possess knowledge that we do not wish them to,
including memorized personal information and knowledge that could be used to harm …