Dedicated Inference Engine and Binary-Weight Neural Networks for Lightweight Instance Segmentation

TW Chen, W Tao, D Zhao, K Mima… - Proceedings of the …, 2024 - openaccess.thecvf.com
Abstract Binary-weight Neural Networks (BNNs), in which weights are binarized and activations are quantized, are employed to reduce the computational costs of various kinds of …
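The weight-binarization step mentioned in this snippet can be sketched as follows. This is a minimal illustration assuming the common sign-and-scale recipe (binary weights in {-1, +1} with a per-tensor scaling factor equal to the mean absolute weight); `binarize_weights` is a hypothetical helper, not a function from the paper, whose exact scheme is not shown in this excerpt.

```python
import numpy as np

def binarize_weights(w: np.ndarray):
    """Replace real-valued weights with {-1, +1} values plus a scale.

    alpha = mean(|w|) so that alpha * b approximates w in an
    L1 sense; multiply-accumulates with b then reduce to
    additions/subtractions, which is the source of the cost savings.
    """
    alpha = float(np.abs(w).mean())    # scaling factor
    b = np.where(w >= 0, 1.0, -1.0)    # binary weights in {-1, +1}
    return b, alpha

w = np.array([0.4, -0.2, 0.1, -0.7])
b, alpha = binarize_weights(w)
x = np.array([1.0, 2.0, 3.0, 4.0])
approx = alpha * (b @ x)               # approximates w @ x cheaply
```

With activations additionally quantized to low bit-widths, the inner product needs no floating-point multiplications at inference time, which is what makes BNNs attractive for lightweight dedicated hardware.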

A 16 nJ/classification FPGA-based wired-logic DNN accelerator using fixed-weight non-linear neural net

A Kosuge, M Hamada, T Kuroda - IEEE Journal on Emerging …, 2021 - ieeexplore.ieee.org
A reconfigurable field-programmable gate array (FPGA)-based wired-logic deep neural
network (DNN) accelerator is presented. High energy efficiency of 16 nJ/classification …

Reconfigurable multivalued memristor FPGA model for digital recognition

Z Zhang, A Xu, HT Ren, G Liu… - International Journal of …, 2022 - Wiley Online Library
Compared with the traditional memristor, the multivalued memristor is more significant to study for improving the stability and reliability of memristive neural networks. Developing …

A Post-Quantum Encryption Mechanism Based on Convolutional Neural Network Accelerator

Y Huang, G Fan, J Mai, W Jiang, J Hu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
For most edge-based system-on-chips (SoCs), the inference subsystem (e.g., a CNN accelerator) and the security subsystem are typically designed separately and interact with each …

Convolutional Neural Networks Inference Accelerator Design using Selective Convolutional Layer

TH Huang, IC Wey, E Goh… - 2023 IEEE 16th …, 2023 - ieeexplore.ieee.org
Convolutional Neural Networks (CNNs) often require a huge number of multiplications. The current approach to multiplication reduction requires data preprocessing, which is power …