Deep learning based object detection for resource constrained devices: Systematic review, future trends and challenges ahead

V Kamath, A Renuka - Neurocomputing, 2023 - Elsevier
Deep learning models are being widely employed for object detection due to their high
performance. However, the majority of applications that require object detection are …

A systematic literature review on binary neural networks

R Sayed, H Azmi, H Shawkey, AH Khalil… - IEEE Access, 2023 - ieeexplore.ieee.org
This paper presents an extensive literature review on Binary Neural Networks (BNNs). BNNs
utilize binary weights and activation function parameters to substitute the full-precision …
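The core idea the snippet alludes to, constraining weights and activations to {-1, +1}, can be illustrated with a minimal sketch (not taken from the paper; all function names here are hypothetical): packing the signs into bitmasks lets a dot product be computed with XNOR and popcount instead of multiplications.

```python
# Sketch: binarized dot product via XNOR + popcount.
# Encoding -1 as bit 0 and +1 as bit 1, the dot product of two
# +/-1 vectors of length n equals 2*popcount(~(a XOR b)) - n.

def binarize(x):
    """Map a real-valued vector to {-1, +1} by sign (0 maps to +1)."""
    return [1 if v >= 0 else -1 for v in x]

def to_bits(b):
    """Pack a +/-1 vector into an integer bitmask (+1 -> 1, -1 -> 0)."""
    mask = 0
    for i, v in enumerate(b):
        if v == 1:
            mask |= 1 << i
    return mask

def xnor_popcount_dot(a, b, n):
    """Dot product of two packed +/-1 vectors of length n."""
    matches = bin(~(a ^ b) & ((1 << n) - 1)).count("1")  # agreeing positions
    return 2 * matches - n

w = binarize([0.3, -1.2, 0.7, -0.1])   # -> [+1, -1, +1, -1]
x = binarize([0.9, 0.4, -0.5, -2.0])   # -> [+1, +1, -1, -1]
ref = sum(wi * xi for wi, xi in zip(w, x))           # plain dot product: 0
fast = xnor_popcount_dot(to_bits(w), to_bits(x), 4)  # bitwise version: 0
assert ref == fast
```

This bit-level formulation is what makes BNNs attractive on FPGAs and other constrained hardware: a full multiply-accumulate collapses into a few logic operations per word of packed weights.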

FLUTE: fast and secure lookup table evaluations

A Brüggemann, R Hundt, T Schneider… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
The concept of using Lookup Tables (LUTs) instead of Boolean circuits is well known and
has been widely applied in a variety of applications, including FPGAs, image processing, and …
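The general LUT idea referenced here (this sketch does not reproduce FLUTE's secure two-party protocol; the helper names are hypothetical) is that any k-input Boolean function can be evaluated by indexing a 2^k-entry truth table with the packed input bits, rather than by evaluating a network of gates. This is also how FPGA LUTs realize arbitrary small functions.

```python
# Sketch: evaluate a Boolean function by table lookup instead of gates.

def build_lut(f, k):
    """Tabulate a k-input Boolean function into a 2^k-entry list."""
    table = []
    for idx in range(1 << k):
        bits = [(idx >> i) & 1 for i in range(k)]  # unpack index into input bits
        table.append(f(*bits))
    return table

def eval_lut(table, bits):
    """Evaluate by packing the input bits into a table index."""
    idx = sum(b << i for i, b in enumerate(bits))
    return table[idx]

# Example: 3-input majority function as a single 8-entry LUT.
maj = build_lut(lambda a, b, c: int(a + b + c >= 2), 3)
assert eval_lut(maj, [1, 0, 1]) == 1
assert eval_lut(maj, [0, 0, 1]) == 0
```

The trade-off is exponential table size in the number of inputs, which is why LUT-based approaches target small fan-in functions and compose them.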

FPGA-based deep learning inference accelerators: Where are we standing?

A Nechi, L Groth, S Mulhem, F Merchant… - ACM Transactions on …, 2023 - dl.acm.org
Recently, artificial intelligence applications have become part of almost all emerging
technologies around us. Neural networks, in particular, have shown significant advantages …

Logic synthesis meets machine learning: Trading exactness for generalization

S Rai, WL Neto, Y Miyasaka, X Zhang… - … , Automation & Test …, 2021 - ieeexplore.ieee.org
Logic synthesis is a fundamental step in hardware design whose goal is to find structural
representations of Boolean functions while minimizing delay and area. If the function is …

Optimizing temporal convolutional network inference on FPGA-based accelerators

M Carreras, G Deriu, L Raffo, L Benini… - IEEE Journal on …, 2020 - ieeexplore.ieee.org
Convolutional Neural Networks (CNNs) are extensively used in a wide range of
applications, commonly including computer vision tasks like image and video classification …

Most resource efficient matrix vector multiplication on FPGAs

A Lehnert, P Holzinger, S Pfenning, R Müller… - IEEE …, 2023 - ieeexplore.ieee.org
Fast and resource-efficient inference in artificial neural networks (ANNs) is of utmost
importance and drives many new developments in the area of new hardware architectures …

Enabling binary neural network training on the edge

E Wang, JJ Davis, D Moro, P Zielinski, JJ Lim… - Proceedings of the 5th …, 2021 - dl.acm.org
The ever-growing computational demands of increasingly complex machine learning
models frequently necessitate the use of powerful cloud-based infrastructure for their …

Reconfigurable binary neural network accelerator with adaptive parallelism scheme

J Cho, Y Jung, S Lee, Y Jung - Electronics, 2021 - mdpi.com
Binary neural networks (BNNs) have attracted significant interest for the implementation of
deep neural networks (DNNs) on resource-constrained edge devices, and various BNN …

Accelerating DNNs from local to virtualized FPGA in the Cloud: A survey of trends

C Wu, V Fresse, B Suffran, H Konik - Journal of Systems Architecture, 2021 - Elsevier
Field-programmable gate arrays (FPGAs) are widely used locally to speed up deep neural
network (DNN) algorithms with high computational throughput and energy efficiency …