Condensing graphs via one-step gradient matching

W Jin, X Tang, H Jiang, Z Li, D Zhang, J Tang… - Proceedings of the 28th …, 2022 - dl.acm.org
As training deep learning models on large datasets takes a lot of time and resources, it is
desirable to construct a small synthetic dataset with which we can train deep learning models …

Improved distribution matching for dataset condensation

G Zhao, G Li, Y Qin, Y Yu - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Dataset Condensation aims to condense a large dataset into a smaller one while
maintaining its ability to train a well-performing model, thus reducing the storage cost and …

Delving into effective gradient matching for dataset condensation

Z Jiang, J Gu, M Liu, DZ Pan - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
As deep learning models and datasets rapidly scale up, model training is extremely time-
consuming and resource-costly. Instead of training on the entire dataset, learning with a …

DC-BENCH: Dataset condensation benchmark

J Cui, R Wang, S Si, CJ Hsieh - Advances in Neural …, 2022 - proceedings.neurips.cc
Dataset Condensation is a newly emerging technique aiming at learning a tiny dataset that
captures the rich information encoded in the original dataset. As the size of datasets …

Slimmable dataset condensation

S Liu, J Ye, R Yu, X Wang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Dataset distillation, also known as dataset condensation, aims to compress a large dataset
into a compact synthetic one. Existing methods perform dataset condensation by assuming a …

Dataset condensation with distribution matching

B Zhao, H Bilen - Proceedings of the IEEE/CVF Winter …, 2023 - openaccess.thecvf.com
Computational cost of training state-of-the-art deep models in many learning problems is
rapidly increasing due to more sophisticated models and larger datasets. A recent promising …

Dataset condensation with gradient matching

B Zhao, KR Mopuri, H Bilen - arXiv preprint arXiv:2006.05929, 2020 - arxiv.org
As the state-of-the-art machine learning methods in many fields rely on larger datasets,
storing datasets and training models on them become significantly more expensive. This …

Squeeze, recover and relabel: Dataset condensation at imagenet scale from a new perspective

Z Yin, E Xing, Z Shen - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We present a new dataset condensation framework termed Squeeze, Recover and Relabel
(SRe$^2$L) that decouples the bilevel optimization of model and synthetic data during …

Dataset condensation via efficient synthetic-data parameterization

JH Kim, J Kim, SJ Oh, S Yun, H Song… - International …, 2022 - proceedings.mlr.press
The great success of machine learning with massive amounts of data comes at the price of
huge computation costs and storage for training and tuning. Recent studies on dataset …

Structure-free graph condensation: From large-scale graphs to condensed graph-free data

X Zheng, M Zhang, C Chen… - Advances in …, 2024 - proceedings.neurips.cc
Graph condensation, which reduces the size of a large-scale graph by synthesizing a small-
scale condensed graph as its substitute, has immediate benefits for various graph learning …