The increasing computational intensity of important new applications poses a challenge for their use in resource-constrained devices. Approximate computing using power-efficient …
Edge training of deep neural networks (DNNs) is a desirable goal for continuous learning; however, it is hindered by the enormous computational power required by training …
S Kim, CJ Norris, JI Oelund… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The IEEE 754 standard for floating-point (FP) arithmetic is widely used to represent real numbers. Recently, a variant called posit was proposed to improve the precision around 1 and −1 …
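The snippet's posit details are cut off, but the posit format itself (Gustafson's type III unum) is public: a sign bit, a variable-length regime field, es exponent bits, and a fraction, which tapers precision so that it peaks near ±1. Below is a minimal decoder sketch in Python; the defaults nbits = 8 and es = 1 and the name decode_posit are illustrative choices, not taken from the cited paper.

```python
def decode_posit(bits: int, nbits: int = 8, es: int = 1) -> float:
    """Decode an nbits-wide posit with es exponent bits into a float.

    Illustrative sketch: value = (-1)^s * (2^(2^es))^k * 2^e * (1 + f),
    where k comes from the regime run length. No rounding logic.
    """
    mask = (1 << nbits) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (nbits - 1):
        return float("nan")                 # NaR ("Not a Real")
    sign = -1.0 if bits >> (nbits - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask               # posits negate by 2's complement
    rest = (bits << 1) & mask               # drop the sign bit
    first = rest >> (nbits - 1)             # leading regime bit
    run = 0
    while run < nbits - 1 and (rest >> (nbits - 1 - run)) & 1 == first:
        run += 1                            # regime = run of identical bits
    k = run - 1 if first else -run
    rem = max(nbits - (run + 2), 0)         # bits left after sign, regime, stop bit
    tail = bits & ((1 << rem) - 1)
    e_bits = min(es, rem)
    e = tail >> (rem - e_bits) if e_bits else 0
    f_bits = rem - e_bits
    f = (tail & ((1 << f_bits) - 1)) / (1 << f_bits) if f_bits else 0.0
    return sign * 2.0 ** ((1 << es) * k + e) * (1.0 + f)
```

For example, decode_posit(0b01000000) returns 1.0 and decode_posit(0b01001000) returns 1.5; successive codes are densest in magnitude near 1, which is exactly the precision property the snippet refers to.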
A Towhidy, R Omidi, K Mohammadi - AEU-International Journal of …, 2021 - Elsevier
In this paper, we propose energy-efficient unsigned approximate multipliers using the Pseudo-Booth (PB) encoding that are suitable for large dynamic-range operands thanks to …
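The snippet does not reproduce the Pseudo-Booth encoding itself, so as background the sketch below shows the conventional radix-4 (modified) Booth recoding that such multipliers build on, not the paper's PB variant: the multiplier is scanned in overlapping 3-bit windows, each recoded into a digit from {−2, −1, 0, +1, +2}, which halves the number of partial products. Function names and the zero-extension handling for unsigned operands are illustrative.

```python
def booth_radix4_digits(m: int, width: int):
    """Recode an unsigned width-bit multiplier into radix-4 Booth digits.

    Each digit d_i = b[2i-1] + b[2i] - 2*b[2i+1] lies in {-2..+2},
    with b[-1] = 0; the operand is zero-extended so unsigned values
    recode correctly.
    """
    m <<= 1                        # make room for the implicit b[-1] = 0
    width += 2                     # zero-extend for the unsigned top digit
    if width % 2:
        width += 1
    digits = []
    for i in range(0, width, 2):
        window = (m >> i) & 0b111  # overlapping 3-bit window
        b0, b1, b2 = window & 1, (window >> 1) & 1, window >> 2
        digits.append(b0 + b1 - 2 * b2)
    return digits

def booth_multiply(a: int, b: int, width: int = 8) -> int:
    """Exact product via the recoded digits: sum of d_i * a * 4^i."""
    return sum(d * a << (2 * i)
               for i, d in enumerate(booth_radix4_digits(b, width)))
```

booth_multiply here is exact, since the recoding itself is lossless; approximate designs in this family save energy by simplifying how the recoded partial products are generated and accumulated.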
X Wu, Z Wei, SB Ko, H Zhang - 2023 5th International …, 2023 - ieeexplore.ieee.org
In this paper, an efficient logarithmic approximate multiplier architecture is proposed. The proposed architecture is based on the Mitchell approximate multiplier. Several …
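Mitchell's logarithmic multiplication, which the snippet names as the basis of the proposed architecture, is well documented: approximate log2 n as k + x, where k = floor(log2 n) and x is the normalized mantissa fraction, add the two approximate logs, and take an approximate antilog. A float-based Python sketch follows (a hardware implementation would use a fixed-point datapath rather than Python floats):

```python
def mitchell_multiply(a: int, b: int) -> int:
    """Approximate a*b via Mitchell's logarithmic multiplication.

    log2(n) ~ k + x with k = floor(log2 n) and x = (n - 2^k) / 2^k;
    the two approximate logs are added, then an approximate antilog
    2^(k+s) ~ 2^k * (1 + s) converts the sum back.
    """
    if a == 0 or b == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1
    xa = (a - (1 << ka)) / (1 << ka)   # mantissa fraction of a
    xb = (b - (1 << kb)) / (1 << kb)   # mantissa fraction of b
    k, s = ka + kb, xa + xb
    if s >= 1.0:                       # fraction sum carries over
        k, s = k + 1, s - 1.0
    return round((1 << k) * (1.0 + s))
```

For instance, mitchell_multiply(5, 7) yields 32 against the exact 35; Mitchell's scheme always underestimates, with a worst-case relative error of about 11.1%, which is the error that refinements of Mitchell's design typically target.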
J Ge, C Yan, X Zhao, K Chen, B Wu… - 2022 IEEE Asia Pacific …, 2022 - ieeexplore.ieee.org
Approximate computing has been introduced to reduce circuit area and power consumption in error-tolerant applications. Wireless communication systems can …
C Yan, K Chen, W Liu - Design and Applications of Emerging Computer …, 2024 - Springer
In recent years, communication circuits and systems have become more complex and more power-hungry. However, due to channel noise and forward error correction …
Floating-point (FP) arithmetic computation is favored for training neural networks (NNs) due to its wide numerical range. The computation-intensive training process requires a …