Dataset distillation: A comprehensive review

R Yu, S Liu, X Wang - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Recent success of deep learning is largely attributed to the sheer amount of data used for
training deep neural networks. Despite the unprecedented success, the massive data …

A comprehensive survey of dataset distillation

S Lei, D Tao - IEEE Transactions on Pattern Analysis and …, 2023 - ieeexplore.ieee.org
Deep learning technology has developed unprecedentedly in the last decade and has
become the primary choice in many application domains. This progress is mainly attributed …

Generalizing dataset distillation via deep generative prior

G Cazenavette, T Wang, A Torralba… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dataset Distillation aims to distill an entire dataset's knowledge into a few synthetic images.
The idea is to synthesize a small number of synthetic data points that, when given to a …
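As a hedged illustration of the bi-level idea this snippet sketches (and not the method of this particular paper), the following minimal PyTorch example learns a small synthetic set by backpropagating through a short, differentiable inner training loop. Every name and hyperparameter (syn_x, the inner step count, the learning rates, the toy linear model) is an assumption made only for illustration.

```python
# Hypothetical minimal sketch of the bi-level dataset-distillation objective
# (illustrative only; not the algorithm of any specific paper listed here).
import torch

torch.manual_seed(0)

# "Real" data the distilled set should stand in for (toy random stand-in).
real_x = torch.randn(256, 32)
real_y = torch.randint(0, 10, (256,))

# Learnable synthetic dataset: one synthetic sample per class (assumption).
syn_x = torch.randn(10, 32, requires_grad=True)
syn_y = torch.arange(10)
opt_syn = torch.optim.Adam([syn_x], lr=1e-2)

def init_params():
    # Fresh linear classifier per outer step; kept as plain tensors so the
    # inner SGD updates stay differentiable w.r.t. syn_x.
    w = torch.zeros(32, 10, requires_grad=True)
    b = torch.zeros(10, requires_grad=True)
    return w, b

for outer_step in range(100):
    w, b = init_params()
    # Inner loop: train the model on the synthetic set with differentiable SGD.
    for _ in range(5):
        inner_loss = torch.nn.functional.cross_entropy(syn_x @ w + b, syn_y)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w, b = w - 0.1 * gw, b - 0.1 * gb
    # Outer loss: a model trained on the synthetic data should fit the real data.
    outer_loss = torch.nn.functional.cross_entropy(real_x @ w + b, real_y)
    opt_syn.zero_grad()
    outer_loss.backward()
    opt_syn.step()
```

Keeping the inner model's parameters as plain tensors (rather than an optimizer-managed module) is what lets autograd trace the inner updates back to the synthetic images; the papers above differ mainly in how they avoid or approximate this expensive unrolled computation.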

Scaling up dataset distillation to ImageNet-1K with constant memory


J Cui, R Wang, S Si, CJ Hsieh - International Conference on …, 2023 - proceedings.mlr.press
Dataset Distillation is a newly emerging area that aims to distill large datasets into much
smaller and highly informative synthetic ones to accelerate training and reduce storage …

Minimizing the accumulated trajectory error to improve dataset distillation

J Du, Y Jiang, VYF Tan, JT Zhou… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Model-based deep learning has achieved astounding successes due in part to the
availability of large-scale real-world data. However, processing such massive amounts of …

DREAM: Efficient dataset distillation by representative matching

Y Liu, J Gu, K Wang, Z Zhu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dataset distillation aims to synthesize small datasets with little information loss from original
large-scale ones for reducing storage and training costs. Recent state-of-the-art methods …

DataDAM: Efficient dataset distillation with attention matching

A Sajedi, S Khaki, E Amjadian, LZ Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Researchers have long tried to minimize training costs in deep learning while maintaining
strong generalization across diverse datasets. Emerging research on dataset distillation …

Data distillation: A survey

N Sachdeva, J McAuley - arXiv preprint arXiv:2301.04272, 2023 - arxiv.org
The popularity of deep learning has led to the curation of a vast number of massive and
multifarious datasets. Despite having close-to-human performance on individual tasks …

Data pruning via moving-one-sample-out

H Tan, S Wu, F Du, Y Chen, Z Wang… - Advances in Neural …, 2024 - proceedings.neurips.cc
In this paper, we propose a novel data-pruning approach called moving-one-sample-out
(MoSo), which aims to identify and remove the least informative samples from the training …
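This entry describes pruning by removing the least informative training samples. As a loose, hypothetical sketch of that general idea (not the MoSo estimator itself), one could rank samples by how well each per-sample gradient aligns with the dataset's mean gradient and drop the worst-aligned ones; every function and variable name below is an illustrative assumption.

```python
# Hypothetical gradient-alignment pruning score (illustrates the general
# "remove the least informative samples" idea; NOT the MoSo estimator).
import torch

def per_sample_scores(model, loss_fn, xs, ys):
    """Score each sample by how well its gradient aligns with the mean gradient.

    Under this toy criterion, low-scoring samples contribute little to (or point
    against) the average training signal and are candidates for pruning.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Mean gradient over the whole (small, in-memory) dataset.
    mean_loss = loss_fn(model(xs), ys)
    mean_grad = torch.autograd.grad(mean_loss, params)
    mean_flat = torch.cat([g.reshape(-1) for g in mean_grad])

    scores = []
    for i in range(xs.shape[0]):
        loss_i = loss_fn(model(xs[i:i + 1]), ys[i:i + 1])
        grad_i = torch.autograd.grad(loss_i, params)
        flat_i = torch.cat([g.reshape(-1) for g in grad_i])
        scores.append(torch.dot(flat_i, mean_flat).item())
    return scores

# Toy usage: keep the top 80% of samples by score.
model = torch.nn.Linear(32, 10)
xs, ys = torch.randn(100, 32), torch.randint(0, 10, (100,))
scores = per_sample_scores(model, torch.nn.functional.cross_entropy, xs, ys)
keep = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:80]
```

Samples with low or negative alignment are pruned first under this toy criterion; the paper itself proposes a more principled score, which this sketch does not attempt to reproduce.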

On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm

P Sun, B Shi, D Yu, T Lin - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Contemporary machine learning, which involves training large neural networks on massive
datasets, faces significant computational challenges. Dataset distillation as a recent …