Asteroid: Resource-Efficient Hybrid Pipeline Parallelism for Collaborative DNN Training on Heterogeneous Edge Devices

S Ye, L Zeng, X Chu, G Xing, X Chen - Proceedings of the 30th Annual …, 2024 - dl.acm.org
On-device Deep Neural Network (DNN) training has been recognized as crucial for privacy-
preserving machine learning at the edge. However, the intensive training workload and …

Ftpipehd: A fault-tolerant pipeline-parallel distributed training framework for heterogeneous edge devices

Y Chen, Q Yang, S He, Z Shi, J Chen - arXiv preprint arXiv:2110.02781, 2021 - arxiv.org
With the increased penetration and proliferation of Internet of Things (IoT) devices, there is a
growing trend towards distributing the power of deep learning (DL) across edge devices …

Memory-efficient DNN training on mobile devices

I Gim, JG Ko - Proceedings of the 20th Annual International …, 2022 - dl.acm.org
On-device deep neural network (DNN) training holds the potential to enable a rich set of
privacy-aware and infrastructure-independent personalized mobile applications. However …

AccEPT: An Acceleration Scheme for Speeding Up Edge Pipeline-parallel Training

Y Chen, Y Yan, Q Yang, Y Shu, S He… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
It is usually infeasible to fit and train an entire large deep neural network (DNN) model on a single edge device due to its limited resources. To facilitate intelligent applications across …

Fully distributed deep learning inference on resource-constrained edge devices

R Stahl, Z Zhao, D Mueller-Gritschneder… - … , and Simulation: 19th …, 2019 - Springer
Performing inference for deep learning applications on IoT edge devices ensures the privacy of input data and can result in lower latency compared to a cloud solution. As …

Melon: Breaking the memory wall for resource-efficient on-device machine learning

Q Wang, M Xu, C Jin, X Dong, J Yuan, X Jin… - Proceedings of the 20th …, 2022 - dl.acm.org
On-device learning is a promising technique for emerging privacy-preserving machine
learning paradigms. However, through quantitative experiments, we find that commodity …

Tools and techniques for privacy-aware, edge-centric distributed deep learning

Z Min, RE Canady, U Ghosh, AS Gokhale… - Proceedings of the …, 2020 - dl.acm.org
The training and inference phases of Deep Learning (DL) are compute-intensive and require a substantial amount of cloud-hosted resources. However, the real-time needs of some edge …

Privacy-aware edge computing based on adaptive DNN partitioning

C Shi, L Chen, C Shen, L Song… - 2019 IEEE Global …, 2019 - ieeexplore.ieee.org
Recent years have witnessed deep neural networks (DNNs) become the de facto tool in
many applications such as image classification and speech recognition. But significant …

PipeEdge: A trusted pipelining collaborative edge training based on blockchain

L Yuan, Q He, F Chen, R Dou, H Jin… - Proceedings of the ACM …, 2023 - dl.acm.org
Powered by the massive data generated by the proliferation of mobile and Web-of-Things (WoT) devices, Deep Neural Networks (DNNs) have grown in both accuracy and size in recent …

MistNet: A superior edge-cloud privacy-preserving training framework with one-shot communication

W Guo, J Cui, X Li, L Qu, H Li, A Hu, T Cai - Internet of Things, 2023 - Elsevier
Classical federated learning methods aggregate decentralized data from different devices
into a central location for efficient training. However, these approaches raise significant …