Dataset distillation: A comprehensive review

R Yu, S Liu, X Wang - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Recent success of deep learning is largely attributed to the sheer amount of data used for
training deep neural networks. Despite the unprecedented success, the massive data …

Dataset distillation via factorization

S Liu, K Wang, X Yang, J Ye… - Advances in neural …, 2022 - proceedings.neurips.cc
In this paper, we study dataset distillation (DD) from a novel perspective and introduce
a dataset factorization approach, termed HaBa, which is a plug-and-play …

Slimmable dataset condensation

S Liu, J Ye, R Yu, X Wang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Dataset distillation, also known as dataset condensation, aims to compress a large dataset
into a compact synthetic one. Existing methods perform dataset condensation by assuming a …

DC-BENCH: Dataset condensation benchmark

J Cui, R Wang, S Si, CJ Hsieh - Advances in Neural …, 2022 - proceedings.neurips.cc
Dataset Condensation is a newly emerging technique aiming at learning a tiny dataset that
captures the rich information encoded in the original dataset. As the size of datasets …

MetaGCD: Learning to continually learn in generalized category discovery

Y Wu, Z Chi, Y Wang, S Feng - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In this paper, we consider a real-world scenario where a model that is trained on pre-defined
classes continually encounters unlabeled data that contains both known and novel classes …

Structure-aware protein self-supervised learning

C Chen, J Zhou, F Wang, X Liu, D Dou - Bioinformatics, 2023 - academic.oup.com
Motivation: Protein representation learning methods have shown great potential for many
downstream tasks in biological applications. A few recent studies have demonstrated that …

Meta-DMoE: Adapting to domain shift by meta-distillation from mixture-of-experts

T Zhong, Z Chi, L Gu, Y Wang… - Advances in Neural …, 2022 - proceedings.neurips.cc
In this paper, we tackle the problem of domain shift. Most existing methods perform training
on multiple source domains using a single model, and the same trained model is used on all …

ExPT: synthetic pretraining for few-shot experimental design

T Nguyen, S Agrawal, A Grover - Advances in Neural …, 2024 - proceedings.neurips.cc
Experimental design is a fundamental problem in many science and engineering fields. In
this problem, sample efficiency is crucial due to the time, money, and safety costs of real …

Importance-aware co-teaching for offline model-based optimization

Y Yuan, CS Chen, Z Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Offline model-based optimization aims to find a design that maximizes a property of interest
using only an offline dataset, with applications in robot, protein, and molecule design …

Parallel-mentoring for offline model-based optimization

CS Chen, C Beckham, Z Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
We study offline model-based optimization to maximize a black-box objective function with a
static dataset of designs and scores. These designs encompass a variety of domains …