Dataset distillation: A comprehensive review

R Yu, S Liu, X Wang - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
The recent success of deep learning is largely attributed to the sheer amount of data used for
training deep neural networks. Despite this unprecedented success, the massive data …

NTIRE 2023 challenge on efficient super-resolution: Methods and results

Y Li, Y Zhang, R Timofte, L Van Gool… - Proceedings of the …, 2023 - openaccess.thecvf.com
This paper reviews the NTIRE 2023 challenge on efficient single-image super-resolution
with a focus on the proposed solutions and results. The aim of this challenge is to devise a …

A simple and effective pruning approach for large language models

M Sun, Z Liu, A Bair, JZ Kolter - arXiv preprint arXiv:2306.11695, 2023 - arxiv.org
As their size increases, Large Language Models (LLMs) are natural candidates for network
pruning methods: approaches that drop a subset of network weights while striving to …

Patch diffusion: Faster and more data-efficient training of diffusion models

Z Wang, Y Jiang, H Zheng, P Wang… - Advances in neural …, 2024 - proceedings.neurips.cc
Diffusion models are powerful, but they require substantial time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …

Effective whole-body pose estimation with two-stages distillation

Z Yang, A Zeng, C Yuan, Y Li - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Whole-body pose estimation localizes the human body, hand, face, and foot keypoints in an
image. This task is challenging due to multi-scale body parts, fine-grained localization for …

Diffusion probabilistic model made slim

X Yang, D Zhou, J Feng… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Despite the visually pleasing results achieved, the massive computational cost has been a
long-standing flaw of diffusion probabilistic models (DPMs), which, in turn, greatly limits …

A survey on model compression for large language models

X Zhu, J Li, Y Liu, C Ma, W Wang - arXiv preprint arXiv:2308.07633, 2023 - arxiv.org
Large Language Models (LLMs) have revolutionized natural language processing tasks with
remarkable success. However, their formidable size and computational demands present …

Diffusion model as representation learner

X Yang, X Wang - … of the IEEE/CVF International Conference …, 2023 - openaccess.thecvf.com
Diffusion Probabilistic Models (DPMs) have recently demonstrated impressive
results on various generative tasks. Despite their promise, the learned representations of pre …

GraphAdapter: Tuning vision-language models with dual knowledge graph

X Li, D Lian, Z Lu, J Bai, Z Chen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Adapter-style efficient transfer learning (ETL) has shown excellent performance in the tuning
of vision-language models (VLMs) under the low-data regime, where only a few additional …

A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations

H Cheng, M Zhang, JQ Shi - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
Modern deep neural networks, particularly recent large language models, come with
massive model sizes that require significant computational and storage resources. To …