Dataset distillation: A comprehensive review

R Yu, S Liu, X Wang - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Recent success of deep learning is largely attributed to the sheer amount of data used for
training deep neural networks. Despite the unprecedented success, the massive data …

Learn from model beyond fine-tuning: A survey

H Zheng, L Shen, A Tang, Y Luo, H Hu, B Du… - arXiv preprint arXiv …, 2023 - arxiv.org
Foundation models (FM) have demonstrated remarkable performance across a wide range
of tasks (especially in the fields of natural language processing and computer vision) …

Accelerating dataset distillation via model augmentation

L Zhang, J Zhang, B Lei, S Mukherjee… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dataset Distillation (DD), a newly emerging field, aims at generating much smaller but
efficient synthetic training datasets from large ones. Existing DD methods based on gradient …
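Since the snippet above mentions gradient-based DD methods, here is a minimal, illustrative sketch of gradient-matching dataset distillation in PyTorch. It is not the method of any single paper listed here; names such as `model_fn`, `real_loader`, `num_classes`, and the synthetic-set size are assumptions made up for this example.

```python
# Hedged sketch: gradient-matching dataset distillation (illustrative only).
import torch
import torch.nn.functional as F

def distill_by_gradient_matching(model_fn, real_loader, num_classes,
                                 images_per_class=10, steps=1000, lr_syn=0.1):
    # Learnable synthetic images initialized from noise; labels fixed and balanced.
    syn_x = torch.randn(num_classes * images_per_class, 3, 32, 32, requires_grad=True)
    syn_y = torch.arange(num_classes).repeat_interleave(images_per_class)
    opt_syn = torch.optim.SGD([syn_x], lr=lr_syn)

    for _ in range(steps):
        model = model_fn()                      # freshly initialized network each step
        x_real, y_real = next(iter(real_loader))

        # Gradients of the classification loss on real vs. synthetic data.
        g_real = torch.autograd.grad(
            F.cross_entropy(model(x_real), y_real), model.parameters())
        g_syn = torch.autograd.grad(
            F.cross_entropy(model(syn_x), syn_y), model.parameters(),
            create_graph=True)

        # Match the two gradient sets layer by layer via cosine distance,
        # then update only the synthetic images.
        match_loss = sum(1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
                         for gr, gs in zip(g_real, g_syn))
        opt_syn.zero_grad()
        match_loss.backward()
        opt_syn.step()

    return syn_x.detach(), syn_y
```

The returned synthetic pairs could then be used to train a new model from scratch; this is only one of several DD formulations (others match features or training trajectories).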

Data-free knowledge distillation via feature exchange and activation region constraint

S Yu, J Chen, H Han, S Jiang - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Despite the tremendous progress on data-free knowledge distillation (DFKD) based on
synthetic data generation, there are still limitations in diverse and efficient data synthesis. It …

Learning to retain while acquiring: Combating distribution-shift in adversarial data-free knowledge distillation

G Patel, KR Mopuri, Q Qiu - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Data-free Knowledge Distillation (DFKD) has gained popularity recently, with the
fundamental idea of carrying out knowledge transfer from a Teacher neural network to a …
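As a rough illustration of the adversarial data-free KD setup described in the DFKD entries above (not a reproduction of any specific paper), the sketch below alternates a generator step that seeks teacher-student disagreement with a student step that minimizes it; `generator`, `student`, `teacher`, and the latent size are placeholders invented for this example.

```python
# Hedged sketch: one round of adversarial data-free knowledge distillation.
import torch
import torch.nn.functional as F

def dfkd_step(generator, student, teacher, opt_g, opt_s, batch_size=128, z_dim=100):
    teacher.eval()

    # Generator step: synthesize inputs on which student and teacher disagree most.
    z = torch.randn(batch_size, z_dim)
    x_syn = generator(z)
    loss_g = -F.kl_div(F.log_softmax(student(x_syn), dim=1),
                       F.softmax(teacher(x_syn), dim=1),
                       reduction="batchmean")
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Student step: minimize the same divergence on freshly generated inputs.
    with torch.no_grad():
        x_syn = generator(torch.randn(batch_size, z_dim))
        t_prob = F.softmax(teacher(x_syn), dim=1)
    loss_s = F.kl_div(F.log_softmax(student(x_syn), dim=1), t_prob,
                      reduction="batchmean")
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

    return loss_g.item(), loss_s.item()
```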

Target: Federated class-continual learning via exemplar-free distillation

J Zhang, C Chen, W Zhuang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
This paper focuses on an under-explored yet important problem: Federated Class-Continual
Learning (FCCL), where new classes are dynamically added in federated learning. Existing …

Data-free knowledge transfer: A survey

Y Liu, W Zhang, J Wang, J Wang - arXiv preprint arXiv:2112.15278, 2021 - arxiv.org
In the last decade, many deep learning models have been well trained and have achieved great
success in various fields of machine intelligence, especially computer vision and natural …

Are large kernels better teachers than transformers for convnets?

T Huang, L Yin, Z Zhang, L Shen… - International …, 2023 - proceedings.mlr.press
This paper reveals a new appeal of the recently emerged large-kernel Convolutional Neural
Networks (ConvNets): as the teacher in Knowledge Distillation (KD) for small-kernel …

Momentum adversarial distillation: Handling large distribution shifts in data-free knowledge distillation

K Do, TH Le, D Nguyen, D Nguyen… - Advances in …, 2022 - proceedings.neurips.cc
Data-free Knowledge Distillation (DFKD) has attracted attention recently thanks to
its appealing capability of transferring knowledge from a teacher network to a student …

Text-enhanced data-free approach for federated class-incremental learning

MT Tran, T Le, XM Le, M Harandi… - Proceedings of the …, 2024 - openaccess.thecvf.com
Federated Class-Incremental Learning (FCIL) is an underexplored yet pivotal issue
involving the dynamic addition of new classes in the context of federated learning. In this …