Mixed-privacy forgetting in deep networks

A Golatkar, A Achille, A Ravichandran… - Proceedings of the …, 2021 - openaccess.thecvf.com
We show that the influence of a subset of the training samples can be removed--or
"forgotten"--from the weights of a network trained on large-scale image classification tasks …

Eternal sunshine of the spotless net: Selective forgetting in deep networks

A Golatkar, A Achille, S Soatto - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
We explore the problem of selectively forgetting a particular subset of the data used for
training a deep neural network. While the effects of the data to be forgotten can be hidden …

Safe: Machine unlearning with shard graphs

Y Dukler, B Bowman, A Achille… - Proceedings of the …, 2023 - openaccess.thecvf.com
We present Synergy Aware Forgetting Ensemble (SAFE), a method to adapt large
models on a diverse collection of data while minimizing the expected cost to remove the …

Less-forgetting learning in deep neural networks

H Jung, J Ju, M Jung, J Kim - arXiv preprint arXiv:1607.00122, 2016 - arxiv.org
The catastrophic forgetting problem makes deep neural networks forget previously learned
information when learning from data collected in new environments, such as by different sensors …

Packnet: Adding multiple tasks to a single network by iterative pruning

A Mallya, S Lazebnik - … of the IEEE conference on Computer …, 2018 - openaccess.thecvf.com
This paper presents a method for adding multiple tasks to a single deep neural network
while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit …

A comprehensive survey of forgetting in deep learning beyond continual learning

Z Wang, E Yang, L Shen, H Huang - arXiv preprint arXiv:2307.09218, 2023 - arxiv.org
Forgetting refers to the loss or deterioration of previously acquired information or knowledge.
While the existing surveys on forgetting have primarily focused on continual learning …

Overcoming catastrophic forgetting with unlabeled data in the wild

K Lee, K Lee, J Shin, H Lee - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
Lifelong learning with deep neural networks is well-known to suffer from catastrophic
forgetting: the performance on previous tasks drastically degrades when learning a new …

Boundary unlearning: Rapid forgetting of deep networks via shifting the decision boundary

M Chen, W Gao, G Liu, K Peng… - Proceedings of the …, 2023 - openaccess.thecvf.com
The practical needs of the "right to be forgotten" and poisoned data removal call for efficient
machine unlearning techniques, which enable machine learning models to unlearn, or to …

Learning with recoverable forgetting

J Ye, Y Fu, J Song, X Yang, S Liu, X Jin, M Song… - … on Computer Vision, 2022 - Springer
Life-long learning aims at learning a sequence of tasks without forgetting the previously
acquired knowledge. However, the involved training data may not be life-long legitimate due …

Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations

A Golatkar, A Achille, S Soatto - … Conference, Glasgow, UK, August 23–28 …, 2020 - Springer
We describe a procedure for removing dependency on a cohort of training data from a
trained deep network that improves upon and generalizes previous methods to different …