Model compression is an important technique for enabling efficient embedded and hardware implementations of deep neural networks (DNNs); a number of prior works are dedicated to …
J Lee, HJ Yoo - IEEE Open Journal of the Solid-State Circuits …, 2021 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been widely used in various artificial intelligence (AI) applications due to their outstanding performance. Furthermore, recently, several …
J Lee, J Lee, D Han, J Lee, G Park… - 2019 IEEE International …, 2019 - ieeexplore.ieee.org
Recently, deep neural network (DNN) hardware accelerators have been reported for energy-efficient deep learning (DL) acceleration [1–6]. Most prior DNN inference accelerators are …
R Wu, X Guo, J Du, J Li - Electronics, 2021 - mdpi.com
The breakthrough of deep learning has started a technological revolution in various areas such as object identification, image/video recognition and semantic segmentation. Neural …
Machine learning (ML) models are widely used in many important domains. For efficiently processing these computation- and memory-intensive applications, tensors of these …
Custom accelerators improve the energy efficiency, area efficiency, and performance of deep neural network (DNN) inference. This article presents a scalable DNN accelerator …
Large deep neural network (DNN) models pose a key challenge to energy efficiency because off-chip DRAM accesses consume significantly more energy than arithmetic or …
JF Zhang, CE Lee, C Liu, YS Shao… - IEEE Journal of Solid …, 2020 - ieeexplore.ieee.org
Recent developments in deep neural network (DNN) pruning introduce data sparsity to enable deep learning applications to run more efficiently on resource- and energy-…
D Han, HJ Yoo - On-Chip Training NPU-Algorithm, Architecture and …, 2023 - Springer
This chapter presents HNPU, an energy-efficient DNN training processor built on algorithm–hardware co-design. The HNPU supports stochastic dynamic fixed-point …