Structured pruning for deep convolutional neural networks: A survey

Y He, L Xiao - IEEE transactions on pattern analysis and …, 2023 - ieeexplore.ieee.org
The remarkable performance of deep convolutional neural networks (CNNs) is generally
attributed to their deeper and wider architectures, which can come with significant …

AIM 2020 challenge on efficient super-resolution: Methods and results

K Zhang, M Danelljan, Y Li, R Timofte, J Liu… - Computer Vision–ECCV …, 2020 - Springer
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with
focus on the proposed solutions and results. The challenge task was to super-resolve an …

DepGraph: Towards any structural pruning

G Fang, X Ma, M Song, MB Mi… - Proceedings of the …, 2023 - openaccess.thecvf.com
Structural pruning enables model acceleration by removing structurally-grouped parameters
from neural networks. However, the parameter-grouping patterns vary widely across …

Patch Diffusion: Faster and more data-efficient training of diffusion models

Z Wang, Y Jiang, H Zheng, P Wang… - Advances in neural …, 2024 - proceedings.neurips.cc
Diffusion models are powerful, but they require a lot of time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …

RepVGG: Making VGG-style ConvNets great again

X Ding, X Zhang, N Ma, J Han… - Proceedings of the …, 2021 - openaccess.thecvf.com
We present a simple but powerful architecture of convolutional neural network, which has a
VGG-like inference-time body composed of nothing but a stack of 3x3 convolution and …

Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

T Hoefler, D Alistarh, T Ben-Nun, N Dryden… - Journal of Machine …, 2021 - jmlr.org
The growing energy and performance costs of deep learning have driven the community to
reduce the size of neural networks by selectively pruning components. Similarly to their …

Revisiting random channel pruning for neural network compression

Y Li, K Adamczewski, W Li, S Gu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of
neural networks. There has been a flurry of algorithms that try to solve this practical problem …

ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks

X Ding, Y Guo, G Ding, J Han - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
As designing an appropriate Convolutional Neural Network (CNN) architecture in the
context of a given application usually involves heavy human work or numerous GPU hours …

CHIP: Channel independence-based pruning for compact neural networks

Y Sui, M Yin, Y Xie, H Phan… - Advances in Neural …, 2021 - proceedings.neurips.cc
Filter pruning has been widely used for neural network compression because it enables
practical acceleration. To date, most of the existing filter pruning works explore the …

Group Fisher pruning for practical network compression

L Liu, S Zhang, Z Kuang, A Zhou… - International …, 2021 - proceedings.mlr.press
Network compression has been widely studied since it reduces the memory and
computation cost during inference. However, previous methods seldom deal with …