Genetic algorithm-based online-partitioning BranchyNet for accelerating edge inference

J Na, H Zhang, J Lian, B Zhang - Sensors, 2023 - mdpi.com
To effectively apply BranchyNet, a DNN with multiple early-exit branches, in edge intelligence applications, one approach is to divide and distribute the inference task of a …
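
As context for this entry, a minimal sketch (not the authors' code) of what a BranchyNet-style early-exit model looks like, assuming PyTorch; the class name TinyBranchyNet and the entropy threshold are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyBranchyNet(nn.Module):
    """Small CNN with one early exit: confident samples stop at branch1."""

    def __init__(self, num_classes: int = 10, exit_threshold: float = 0.5):
        super().__init__()
        self.exit_threshold = exit_threshold  # hypothetical entropy threshold
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        # Early-exit branch: a cheap classifier attached after stage1.
        self.branch1 = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes)
        )
        self.stage2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        self.final_exit = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        h = self.stage1(x)
        early_logits = self.branch1(h)
        p = F.softmax(early_logits, dim=1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
        # At inference time, low-entropy (confident) inputs leave early.
        if not self.training and entropy.max() < self.exit_threshold:
            return early_logits
        return self.final_exit(self.stage2(h))


if __name__ == "__main__":
    model = TinyBranchyNet().eval()
    with torch.no_grad():
        out = model(torch.randn(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 10])
```

Partitioning such a network, as the paper's genetic-algorithm approach does, then amounts to deciding which stages and branches run on which device.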

ClassyNet: Class-Aware Early Exit Neural Networks for Edge Devices

M Ayyat, T Nadeem, B Krawczyk - IEEE Internet of Things …, 2023 - ieeexplore.ieee.org
Edge-based and IoT devices have seen phenomenal growth in recent years, driven by the
surge in demand for emerging applications that leverage machine learning models, such as …

Multi-exit DNN inference acceleration based on multi-dimensional optimization for edge intelligence

F Dong, H Wang, D Shen, Z Huang… - IEEE Transactions …, 2022 - ieeexplore.ieee.org
Edge intelligence, a prospective paradigm for accelerating DNN inference, is mostly implemented by model partitioning, which inevitably incurs a large transmission overhead …
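
The transmission overhead mentioned here comes from shipping an intermediate activation between the partitions. A minimal sketch under assumed PyTorch conventions (the layer stack, split point, and tensor sizes are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

# Toy CNN expressed as a flat Sequential so it can be sliced at a layer index.
layers = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 56 * 56, 10),
)

split_point = 3              # hypothetical partition point chosen by an optimizer
edge_part = layers[:split_point]    # runs on the edge device
server_part = layers[split_point:]  # runs on the edge server

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    intermediate = edge_part(x)  # this tensor must be transmitted
    payload_bytes = intermediate.numel() * intermediate.element_size()
    print(f"to transmit: {tuple(intermediate.shape)}, {payload_bytes / 1024:.1f} KiB")
    logits = server_part(intermediate)
print(logits.shape)  # torch.Size([1, 10])
```

Moving the split point earlier or later trades edge-side computation against the size of the tensor that crosses the network, which is the dimension such optimization schemes tune.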

The effects of partitioning strategies on energy consumption in distributed CNN inference at the edge

E Tang, X Guo, T Stefanov - arXiv preprint arXiv:2210.08392, 2022 - arxiv.org
Nowadays, many AI applications utilizing resource-constrained edge devices (e.g., small mobile robots, tiny IoT devices) require Convolutional Neural Network (CNN) inference …

EdgeCI: Distributed Workload Assignment and Model Partitioning for CNN Inference on Edge Clusters

Y Chen, T Luo, W Fang, NN Xiong - ACM Transactions on Internet …, 2024 - dl.acm.org
Deep learning technology has grown significantly in new application scenarios such as smart cities and driverless vehicles, but its deployment consumes substantial resources …

EfficientNet-eLite: Extremely lightweight and efficient CNN models for edge devices by network candidate search

CC Wang, CT Chiu, JY Chang - Journal of Signal Processing Systems, 2023 - Springer
Embedding a Convolutional Neural Network (CNN) into edge devices for inference is a very challenging task because such lightweight hardware is not designed to handle this …

Dispense mode for inference to accelerate BranchyNet

Z Liang, Y Zhou - 2022 IEEE International Conference on Image …, 2022 - ieeexplore.ieee.org
With the increase in depth and width, Deep Neural Networks have achieved the best results in computer vision, but their massive computation has placed a heavy burden on IoT devices. To …

Optimized CNN Architectures Benchmarking in Hardware-Constrained Edge Devices in IoT Environments

PD Rosero-Montalvo, P Tözün… - IEEE Internet of Things …, 2024 - ieeexplore.ieee.org
Internet of Things (IoT) and edge devices have grown in their application fields due to
machine learning (ML) models and their capacity to classify images into previously known …

Towards enabling dynamic convolution neural network inference for edge intelligence

A Adeyemo, T Sandefur, TA Odetola… - … Symposium on Circuits …, 2022 - ieeexplore.ieee.org
Deep learning has achieved great success in numerous real-world applications. Deep learning models, especially Convolutional Neural Networks (CNNs), are …

ANNA: Accelerating Neural Network Accelerator through software-hardware co-design for vertical applications in edge systems

C Li, K Zhang, Y Li, J Shang, X Zhang, L Qian - Future Generation …, 2023 - Elsevier
In promising edge systems, AI algorithms and their hardware implementations are often jointly optimized as integrated solutions to solve end-to-end design problems. Joint optimization …