Computational complexity evaluation of neural network applications in signal processing

P Freire, S Srivallapanondh, A Napoli… - arXiv preprint arXiv …, 2022 - arxiv.org
In this paper, we provide a systematic approach for assessing and comparing the
computational complexity of neural network layers in digital signal processing. We provide …
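
To make the kind of per-layer accounting concrete, here is a minimal sketch (assumed, not taken from the paper) that counts multiply-accumulate operations for a dense layer and a 1D convolutional layer using the standard closed-form expressions; all function names are illustrative.

```python
# Standard per-layer multiply-accumulate (MAC) counts of the kind used when
# comparing the computational complexity of neural-network layers.
# Helper names are illustrative, not from the paper.

def dense_macs(n_in: int, n_out: int) -> int:
    """A fully connected layer performs n_in multiplications per output neuron."""
    return n_in * n_out

def conv1d_macs(output_len: int, kernel_size: int, c_in: int, c_out: int) -> int:
    """A 1D convolution performs kernel_size * c_in MACs per output element."""
    return output_len * kernel_size * c_in * c_out

if __name__ == "__main__":
    print(dense_macs(256, 128))           # 32768 MACs
    print(conv1d_macs(1024, 7, 16, 32))   # 3670016 MACs
```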

A comprehensive survey on model quantization for deep neural networks in image classification

B Rokh, A Azarpeyvand, A Khanteymoori - ACM Transactions on …, 2023 - dl.acm.org
Recent advancements in machine learning achieved by Deep Neural Networks (DNNs)
have been significant. While demonstrating high accuracy, DNNs are associated with a …
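
As background for what such surveys compare, the sketch below shows generic uniform affine quantization of a weight tensor to 8-bit integers and the corresponding dequantization; it is a textbook illustration under assumed parameter choices, not one of the schemes reviewed in the paper.

```python
import numpy as np

# Uniform affine quantization of a float tensor to unsigned 8-bit integers,
# plus the matching dequantization. Generic textbook scheme; parameter names
# and choices are illustrative.

def quantize(x: np.ndarray, num_bits: int = 8):
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = max(float(x.max() - x.min()) / (qmax - qmin), 1e-12)
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(w)
print(np.abs(w - dequantize(q, s, z)).max())  # worst-case rounding error
```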

A survey on methods and theories of quantized neural networks

Y Guo - arXiv preprint arXiv:1808.04752, 2018 - arxiv.org
Deep neural networks are the state-of-the-art methods for many real-world tasks, such as
computer vision, natural language processing and speech recognition. For all their popularity …

Extremely low bit neural network: Squeeze the last bit out with ADMM

C Leng, Z Dou, H Li, S Zhu, R Jin - … of the AAAI conference on artificial …, 2018 - ojs.aaai.org
Although deep learning models are highly effective for various learning tasks, their high
computational costs prohibit deployment in scenarios where either memory or …

Neural networks with few multiplications

Z Lin, M Courbariaux, R Memisevic… - arXiv preprint arXiv …, 2015 - arxiv.org
For most deep learning algorithms, training is notoriously time-consuming. Since most of the
computation in training neural networks is typically spent on floating point multiplications, we …
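
The idea in the title can be illustrated with a small assumed sketch: if weights are binarized to ±1, every weight-input product becomes an addition or a subtraction, so a matrix-vector product needs no floating-point multiplications. This is a generic demonstration of the substitution, not the authors' exact training procedure.

```python
import numpy as np

# With weights constrained to +1 / -1, the product w_ij * x_j is just +x_j or
# -x_j, so a matrix-vector product can be computed with additions and
# subtractions only. Generic illustration of the substitution.

def binarize(w: np.ndarray) -> np.ndarray:
    """Map real-valued weights to +1 / -1 by sign."""
    return np.where(w >= 0, 1.0, -1.0)

def matvec_without_multiplications(w_bin: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Add inputs where the binary weight is +1, subtract where it is -1."""
    pos = np.where(w_bin > 0, x, 0.0).sum(axis=1)
    neg = np.where(w_bin < 0, x, 0.0).sum(axis=1)
    return pos - neg

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 5))
x = rng.standard_normal(5)
w_b = binarize(w)
print(np.allclose(matvec_without_multiplications(w_b, x), w_b @ x))  # True
```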

FPGA-based real-time epileptic seizure classification using Artificial Neural Network

R Sarić, D Jokić, N Beganović, LG Pokvić… - … Signal Processing and …, 2020 - Elsevier
Epilepsy is a neurological disorder characterised by unusual brain activity, widely known as
seizures, affecting 4-7% of the world's population. The diagnosis of this disorder is currently …

Training quantized nets: A deeper understanding

H Li, S De, Z Xu, C Studer, H Samet… - Advances in Neural …, 2017 - proceedings.neurips.cc
Currently, deep neural networks are deployed on low-power portable devices by first
training a full-precision model using powerful hardware, and then deriving a corresponding …
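
A toy sketch of the setting (assumed, for illustration only): the forward pass uses a coarsely rounded weight while gradient updates accumulate in a full-precision copy, the BinaryConnect-style scheme this line of work analyzes; the step size and learning rate below are arbitrary.

```python
import numpy as np

# Toy 1D least-squares fit in which the forward pass uses a coarsely rounded
# (quantized) weight while gradient updates accumulate in a full-precision
# shadow weight. All values are illustrative.

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
w_true = 0.37
y = w_true * x

def quantize(w: float, step: float = 0.25) -> float:
    """Round a weight to a coarse fixed-point grid."""
    return float(np.round(w / step) * step)

w_fp = 0.0                           # full-precision shadow weight
lr = 0.1
for _ in range(100):
    w_q = quantize(w_fp)             # quantized weight used in the forward pass
    grad = float(np.mean(2 * (w_q * x - y) * x))
    w_fp -= lr * grad                # update the full-precision copy

print(quantize(w_fp))                # deployed low-precision weight (0.25 or 0.5)
```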

ShiftAddNet: A hardware-inspired deep network

H You, X Chen, Y Zhang, C Li, S Li… - Advances in …, 2020 - proceedings.neurips.cc
Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks
(DNNs). However, intensive multiplications cause expensive resource costs that challenge …
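
A hedged sketch of the shift half of the idea: rounding a weight to the nearest power of two turns multiplication into a bit shift on integer activations. This generic Python illustration only demonstrates the substitution; it is not the network architecture proposed in the paper.

```python
import numpy as np

# Multiplying by a weight that has been rounded to the nearest power of two
# reduces to a bit shift on an integer activation. Illustrative sketch only.

def to_power_of_two(w: float) -> tuple:
    """Return (sign, exponent) so that w is approximated by sign * 2**exponent."""
    sign = 1 if w >= 0 else -1
    exponent = int(np.round(np.log2(abs(w)))) if w != 0 else -30
    return sign, exponent

def shift_multiply(x_int: int, w: float) -> int:
    """Approximate x_int * w by shifting x_int according to the weight's exponent."""
    sign, e = to_power_of_two(w)
    return sign * (x_int << e if e >= 0 else x_int >> -e)

print(shift_multiply(100, 0.5))   # 50: exact, since 0.5 is already a power of two
print(shift_multiply(100, 3.0))   # 400: 3.0 is rounded to 2**2
```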

Improving the accuracy and hardware efficiency of neural networks using approximate multipliers

MS Ansari, V Mrazek, BF Cockburn… - … Transactions on Very …, 2019 - ieeexplore.ieee.org
Improving the accuracy of a neural network (NN) usually requires using larger hardware that
consumes more energy. However, the error tolerance of NNs and their applications allow …
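
One classical family of approximate multipliers can be sketched in software: a Mitchell-style logarithmic multiplier approximates log2 of x = 2**k * (1 + f) by k + f, so a product becomes an addition in the log domain at the cost of a bounded relative error. This is a generic assumed illustration, not the specific designs evaluated in the paper.

```python
import math

# Mitchell-style logarithmic approximate multiplication: use the piecewise-
# linear approximation log2(2**k * (1 + f)) ~= k + f, so that a product of
# positive numbers becomes an addition. Illustrative sketch only.

def approx_log2(x: float) -> float:
    k = math.floor(math.log2(x))
    f = x / (2 ** k) - 1.0             # fractional part, in [0, 1)
    return k + f

def approx_multiply(a: float, b: float) -> float:
    s = approx_log2(a) + approx_log2(b)
    k = math.floor(s)
    return (2 ** k) * (1.0 + (s - k))  # invert the same approximation

print(approx_multiply(13, 9), 13 * 9)  # 112.0 vs 117 (about 4% low)
```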

Secure evaluation of quantized neural networks

A Dalskov, D Escudero, M Keller - arXiv preprint arXiv:1910.12435, 2019 - arxiv.org
We investigate two questions in this paper: First, we ask to what extent "MPC friendly"
models are already supported by major Machine Learning frameworks such as TensorFlow …