Rethinking data distillation: Do not overlook calibration

D Zhu, B Lei, J Zhang, Y Fang, Y Xie… - Proceedings of the …, 2023 - openaccess.thecvf.com
Neural networks trained on distilled data often produce over-confident outputs and require
correction by calibration methods. Existing calibration methods such as temperature scaling …
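Temperature scaling, the baseline calibration method named in the snippet, divides the logits by a single scalar T fitted on held-out data. A minimal PyTorch sketch of that general technique (the function name and optimizer settings are illustrative assumptions, not this paper's implementation):

```python
import torch
import torch.nn as nn

def fit_temperature(val_logits, val_labels, max_iter=100):
    """Fit a single temperature T on held-out logits by minimizing the
    negative log-likelihood of softmax(logits / T)."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)
    nll = nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage sketch: T = fit_temperature(val_logits, val_labels)
# calibrated_probs = torch.softmax(test_logits / T, dim=1)
```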

Data distillation can be like vodka: Distilling more times for better quality

X Chen, Y Yang, Z Wang, B Mirzasoleiman - arXiv preprint arXiv …, 2023 - arxiv.org
Dataset distillation aims to minimize the time and memory needed for training deep networks
on large datasets, by creating a small set of synthetic images that has a similar …
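In its standard formulation (common to this line of work rather than specific to this paper), dataset distillation is a bilevel problem: synthesize a small set S so that a model trained on S still performs well on the original data T:

```latex
\min_{\mathcal{S}} \; \mathcal{L}_{\mathcal{T}}\bigl(\theta^{*}(\mathcal{S})\bigr)
\quad \text{s.t.} \quad
\theta^{*}(\mathcal{S}) = \arg\min_{\theta} \mathcal{L}_{\mathcal{S}}(\theta),
```

where L_S and L_T denote the training losses on the synthetic and original sets, respectively.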

Calibrating deep neural networks using focal loss

J Mukhoti, V Kulharia, A Sanyal… - Advances in …, 2020 - proceedings.neurips.cc
Miscalibration of Deep Neural Networks (DNNs), a mismatch between a model's confidence
and its correctness, makes their predictions hard to rely on. Ideally, we want networks …
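Focal loss down-weights already well-classified examples by a factor (1 - p_t)^gamma, which is what this paper studies as a calibration-friendly training objective. A minimal multi-class sketch of the standard focal loss (the default gamma and function name are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    """Multi-class focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t),
    where p_t is the predicted probability of the true class."""
    log_probs = F.log_softmax(logits, dim=1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per sample
    pt = log_pt.exp()
    return ((1.0 - pt) ** gamma * (-log_pt)).mean()
```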

Regularizing class-wise predictions via self-knowledge distillation

S Yun, J Park, K Lee, J Shin - Proceedings of the IEEE/CVF …, 2020 - openaccess.thecvf.com
Deep neural networks with millions of parameters may suffer from poor generalization due to
overfitting. To mitigate the issue, we propose a new regularization method that penalizes the …
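One common form of such class-wise self-distillation matches the predictive distribution on one sample to that on another sample of the same class, with the second prediction detached as a soft target. A hedged sketch under that reading (the pairing scheme, temperature tau, and loss weight are assumptions, not necessarily this paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def class_wise_self_distillation(logits_a, logits_b, tau=4.0):
    """KL divergence between softened predictions on two same-class samples;
    logits_b serves as the detached teacher for logits_a."""
    student = F.log_softmax(logits_a / tau, dim=1)
    teacher = F.softmax(logits_b.detach() / tau, dim=1)
    return F.kl_div(student, teacher, reduction="batchmean") * (tau ** 2)

# total_loss = cross_entropy_loss + lambda_kd * class_wise_self_distillation(...)
```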

Dataset distillation by matching training trajectories

G Cazenavette, T Wang, A Torralba… - Proceedings of the …, 2022 - openaccess.thecvf.com
Dataset distillation is the task of synthesizing a small dataset such that a model trained on
the synthetic set will match the test accuracy of the model trained on the full dataset. In this …
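The matching objective in this line of work is typically a normalized distance between parameters reached by training on the synthetic set and parameters from an expert trajectory trained on real data; one common form (notation assumed here) is

```latex
\mathcal{L} \;=\;
\frac{\bigl\lVert \hat{\theta}_{t+N} - \theta^{*}_{t+M} \bigr\rVert_{2}^{2}}
     {\bigl\lVert \theta^{*}_{t} - \theta^{*}_{t+M} \bigr\rVert_{2}^{2}},
```

where the theta-star values are checkpoints from an expert trained on real data and theta-hat is obtained by N gradient steps on the synthetic set starting from the expert checkpoint at step t.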

Learning student networks in the wild

H Chen, T Guo, C Xu, W Li, C Xu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Data-free learning for student networks is a new paradigm that addresses users' privacy concerns
about exposing the original training data. Since the architectures of modern …

Datadam: Efficient dataset distillation with attention matching

A Sajedi, S Khaki, E Amjadian, LZ Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Researchers have long tried to minimize training costs in deep learning while maintaining
strong generalization across diverse datasets. Emerging research on dataset distillation …
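Attention matching in this setting generally means aligning spatial attention maps of real and synthetic batches extracted at intermediate layers. A hedged sketch of that idea (the power p, normalization, and class-averaged matching are assumptions; feature extraction and the paper's additional terms are omitted):

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat, p=2):
    """Collapse a (B, C, H, W) feature map into a (B, H*W) spatial attention
    map by summing |activation|^p over channels, then L2-normalizing."""
    attn = feat.abs().pow(p).sum(dim=1).flatten(1)
    return F.normalize(attn, dim=1)

def attention_matching_loss(real_feats, syn_feats, p=2):
    """Match batch-averaged attention maps of real and synthetic data,
    layer by layer."""
    loss = 0.0
    for fr, fs in zip(real_feats, syn_feats):
        loss = loss + F.mse_loss(spatial_attention(fr, p).mean(0),
                                 spatial_attention(fs, p).mean(0))
    return loss
```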

Class adaptive network calibration

B Liu, J Rony, A Galdran, J Dolz… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent studies have revealed that, beyond conventional accuracy, calibration should also
be considered for training modern deep neural networks. To address miscalibration during …

Dream: Efficient dataset distillation by representative matching

Y Liu, J Gu, K Wang, Z Zhu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Dataset distillation aims to synthesize small datasets from original large-scale ones with little
information loss, reducing storage and training costs. Recent state-of-the-art methods …

On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm

P Sun, B Shi, D Yu, T Lin - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Contemporary machine learning, which involves training large neural networks on massive
datasets, faces significant computational challenges. Dataset distillation as a recent …