H Li, X Yue, Z Wang, Z Chai, W Wang… - Computational …, 2022 - Wiley Online Library
To accelerate the practical applications of artificial intelligence, this paper proposes a highly efficient layer-wise refined pruning method for deep neural networks at the software level …
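The snippet only names the technique; as a minimal sketch of what generic layer-wise magnitude pruning looks like (an illustration, not the authors' refined method; `prune_layerwise` and `sparsity` are invented names, and PyTorch is assumed):

```python
import torch
import torch.nn as nn

def prune_layerwise(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights of each layer independently."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.data
            # Per-layer threshold: the `sparsity`-quantile of |w|, so each
            # layer keeps roughly the same fraction of its own weights.
            thresh = w.abs().flatten().quantile(sparsity)
            module.weight.data.mul_((w.abs() > thresh).float())
```

The defining trait of the layer-wise family is that the threshold is computed per layer rather than globally, so no single layer is pruned to the point of collapse.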
D Wang, J An, K Xu - arXiv preprint arXiv:1611.02450, 2016 - arxiv.org
Convolutional neural networks (CNNs) have been widely employed in many applications such as image classification, video analysis and speech recognition. Being compute …
In recent years, convolutional neural networks (CNNs) have shown great performance in various fields such as image classification, pattern recognition, and multi-media …
Y Zhang, J Zhang, Q Wang, Z Zhong - arXiv preprint arXiv:2004.10694, 2020 - arxiv.org
The convolution operator is the core of convolutional neural networks (CNNs) and accounts for most of their computation cost. To make CNNs more efficient, many methods have been proposed …
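The claim that convolution dominates the cost is easy to verify with back-of-envelope arithmetic; the counting formula below is standard, though the concrete layer shape is just an example:

```python
def conv_macs(c_in: int, c_out: int, h_out: int, w_out: int, k: int = 3) -> int:
    # Multiply-accumulates for one conv layer: every output element
    # (c_out * h_out * w_out of them) needs c_in * k * k MACs.
    return c_out * h_out * w_out * c_in * k * k

# A single 3x3, 256->256 convolution on a 56x56 feature map:
print(conv_macs(256, 256, 56, 56))  # 1,849,688,064 — ~1.8 GMACs for one layer
```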
Z You, K Yan, J Ye, M Ma… - Advances in neural …, 2019 - proceedings.neurips.cc
Filter pruning is one of the most effective ways to accelerate and compress convolutional neural networks (CNNs). In this work, we propose a global filter pruning algorithm called …
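The algorithm's name is cut off in the snippet, but the defining idea of *global* filter pruning can be sketched generically: score every filter in the network with one shared criterion and remove the globally weakest fraction, instead of fixing a pruning ratio per layer. The L1-norm criterion and the names below are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

def global_filter_prune_set(model: nn.Module, ratio: float = 0.3):
    """Return (layer name, filter index) pairs for the globally weakest filters."""
    scores, owners = [], []
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            # One importance score per output filter (here: L1 norm of its weights).
            s = m.weight.data.abs().sum(dim=(1, 2, 3))
            scores.append(s)
            owners.extend((name, i) for i in range(s.numel()))
    all_scores = torch.cat(scores)
    k = int(ratio * all_scores.numel())
    weakest = torch.topk(all_scores, k, largest=False).indices
    return {owners[i] for i in weakest.tolist()}
```

Because the ranking is cross-layer, redundant layers lose more filters than critical ones.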
We present a full-stack optimization framework for accelerating inference of CNNs (Convolutional Neural Networks) and validate the approach with a field-programmable gate …
H Wang, Q Zhang, Y Wang, H Hu - arXiv preprint arXiv:1709.06994, 2017 - arxiv.org
In this paper, we propose a novel progressive parameter pruning method for Convolutional Neural Network acceleration, named Structured Probabilistic Pruning (SPP), which …
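The snippet names SPP but truncates before the mechanism; the sketch below illustrates only the general flavor of *probabilistic* (rather than hard-threshold) pruning and is not a reproduction of SPP — the rank-to-probability mapping is an assumption:

```python
import torch

def probabilistic_mask(weight: torch.Tensor) -> torch.Tensor:
    # Rank weights by magnitude (rank 0 = smallest) via the double-argsort trick.
    ranks = weight.abs().flatten().argsort().argsort().float()
    # Survival probability = magnitude percentile: small weights are pruned
    # more often, but no weight is pruned deterministically, so a weight that
    # looks unimportant early on can still survive and recover.
    survive_p = ranks / (ranks.numel() - 1)
    return torch.bernoulli(survive_p).reshape(weight.shape)
```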
Convolutional Neural Networks (CNNs) are getting deeper and wider to improve their performance, which in turn increases their computational complexity. We apply channel …
Y He, X Zhang, J Sun - Proceedings of the IEEE …, 2017 - openaccess.thecvf.com
In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two …
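The description is cut off mid-sentence; as a generic sketch of reconstruction-style channel pruning (select a channel subset, then re-fit the surviving weights so the layer's original output is approximated), where the greedy selector below is an assumption rather than the paper's selection step:

```python
import numpy as np

def prune_and_reconstruct(X: np.ndarray, y: np.ndarray, keep: int):
    """X: (samples, channels) per-channel contributions; y: original outputs."""
    # Illustrative greedy selection: keep the channels with the largest
    # aggregate contribution (other selectors, e.g. LASSO, fit the same slot).
    kept = np.argsort(np.abs(X).sum(axis=0))[-keep:]
    # Least-squares reconstruction: re-fit weights on the surviving channels
    # so the pruned layer's output stays close to the original y.
    w, *_ = np.linalg.lstsq(X[:, kept], y, rcond=None)
    return kept, w
```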