A systematic literature review on binary neural networks

R Sayed, H Azmi, H Shawkey, AH Khalil… - IEEE Access, 2023 - ieeexplore.ieee.org
This paper presents an extensive literature review on Binary Neural Networks (BNNs). BNNs
use binary weights and activations in place of the full-precision …
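
To make the binarization idea concrete, the sketch below shows the common sign-function scheme with a per-layer scaling factor (XNOR-Net style). It is an illustrative assumption, not the specific design of any paper covered by the review.

```python
import numpy as np

def binarize(x):
    """Map full-precision values to {-1, +1} via the sign function."""
    return np.where(x >= 0, 1.0, -1.0)

def binary_dense(x, W):
    """Dense layer with binarized weights and activations.

    The per-layer scaling factor alpha = mean(|W|) is an XNOR-Net-style
    illustration; individual BNN designs differ in how they scale.
    """
    alpha = np.abs(W).mean()
    return alpha * binarize(x) @ binarize(W)

# Toy usage: full-precision weights are kept for training, binarized on the fly.
x = np.random.randn(4, 8)   # activations
W = np.random.randn(8, 3)   # full-precision weights
y = binary_dense(x, W)
print(y.shape)               # (4, 3)
```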

Sparse random neural networks for online anomaly detection on sensor nodes

S Leroux, P Simoens - Future Generation Computer Systems, 2023 - Elsevier
Whether it is used for predictive maintenance, intrusion detection or surveillance, on-device
anomaly detection is a very valuable functionality in sensor and Internet-of-things (IoT) …

Data-free knowledge distillation in neural networks for regression

M Kang, S Kang - Expert Systems with Applications, 2021 - Elsevier
Knowledge distillation has been used successfully to compress a large neural
network (teacher) into a smaller neural network (student) by transferring the knowledge of …
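
As an illustration of the underlying idea (not the paper's data-free procedure, which synthesizes its own transfer data), the sketch below trains a student regressor to match a frozen teacher's outputs with a mean-squared-error loss; the architectures and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

# Illustrative teacher/student sizes; the actual architectures are assumptions.
teacher = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 1))
student = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

def distill_step(x):
    """One distillation step: the student regresses onto the teacher's outputs."""
    with torch.no_grad():
        target = teacher(x)          # soft targets from the frozen teacher
    loss = mse(student(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

loss = distill_step(torch.randn(32, 16))   # x could be real or synthesized data
```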

Compressing convolutional neural networks with cheap convolutions and online distillation

J Xie, S Lin, Y Zhang, L Luo - Displays, 2023 - Elsevier
Visual impairment assistance systems play a vital role in improving the standard of living for
visually impaired people (VIP). With the development of deep learning technologies and …

CSA-Net: An Adaptive Binary Neural Network and Application on Remote Sensing Image Classification

W Gao, M Tan, H Li, J Xie, X Gao… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
When deep neural networks are used to process remote sensing data, complex network
structures and large parameter counts limit their real-time deployment on satellites. Binary …

Automatic pruning for quantized neural networks

L Guerra, T Drummond - 2021 Digital Image Computing …, 2021 - ieeexplore.ieee.org
Neural network quantization and pruning are two techniques commonly used to reduce the
computational complexity and memory footprint of deep models for deployment. However …
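
For context, the sketch below shows plain global magnitude pruning, i.e. zeroing the smallest-magnitude weights up to a target sparsity; the paper's automatic, quantization-aware pruning criterion is not reproduced here.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (generic global magnitude pruning)."""
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    threshold = np.quantile(flat, sparsity)
    masks = [(np.abs(w) > threshold).astype(w.dtype) for w in weights]
    return [w * m for w, m in zip(weights, masks)], masks

layers = [np.random.randn(64, 32), np.random.randn(32, 10)]
pruned, masks = magnitude_prune(layers, sparsity=0.7)
# Fraction of weights kept (≈ 0.3 for 70% sparsity).
print(sum(m.sum() for m in masks) / sum(m.size for m in masks))
```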

Deep transferring quantization

Z Xie, Z Wen, J Liu, Z Liu, X Wu, M Tan - Computer Vision–ECCV 2020 …, 2020 - Springer
Network quantization is an effective method for network compression. Existing methods train
a low-precision network by fine-tuning from a pre-trained model. However, training a low …
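
A minimal sketch of the usual starting point, fine-tuning with simulated ("fake") quantization and a straight-through estimator, is shown below; the bit-width and min-max scaling are illustrative assumptions, and the paper's transfer-based method goes beyond this.

```python
import torch

def fake_quantize(x, num_bits=8):
    """Simulate uniform quantization in the forward pass while letting
    gradients flow through unchanged (straight-through estimator)."""
    qmax = 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / qmax
    zero_point = x.min()
    q = torch.round((x - zero_point) / scale).clamp(0, qmax)
    x_q = q * scale + zero_point
    # Forward uses the quantized values; backward treats the quantizer as identity.
    return x + (x_q - x).detach()

w = torch.randn(8, 8, requires_grad=True)
fake_quantize(w).sum().backward()
print(w.grad is not None)   # True: gradients pass through the quantizer
```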

Computational optimization of image-based reinforcement learning for robotics

S Ferraro, T Van de Maele, P Mazzaglia, T Verbelen… - Sensors, 2022 - mdpi.com
The robotics field has been deeply influenced by the advent of deep learning. In recent
years, this trend has been characterized by the adoption of large, pretrained models for …

Fq-conv: Fully quantized convolution for efficient and accurate inference

BE Verhoef, N Laubeuf, S Cosemans… - arXiv preprint arXiv …, 2019 - arxiv.org
Deep neural networks (DNNs) can be made hardware-efficient by reducing the numerical
precision of the weights and activations of the network and by improving the network's …
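
To illustrate what fully quantized inference means, the sketch below uses a simple symmetric per-tensor scheme: weights and activations are mapped to signed integers, the matrix product is accumulated in integers, and the result is rescaled at the end. The actual FQ-Conv quantizer design and training procedure are not shown.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Uniform symmetric quantization to signed integers (illustration only)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

x = np.random.randn(4, 8).astype(np.float32)    # activations
w = np.random.randn(8, 3).astype(np.float32)    # weights
qx, sx = quantize(x)
qw, sw = quantize(w)
y_int = qx @ qw                                  # integer accumulation
y = y_int * (sx * sw)                            # dequantize the result
print(np.max(np.abs(y - x @ w)))                 # small quantization error
```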

Automated training of location-specific edge models for traffic counting

S Leroux, B Li, P Simoens - Computers and Electrical Engineering, 2022 - Elsevier
Deep neural networks are the state of the art for various machine learning problems dealing
with large amounts of rich sensor data. It is often desirable to evaluate these models on …