Communication-efficient ADMM-based distributed algorithms for sparse training

G Wang, Y Lei, Y Qiu, L Lou, Y Li - Neurocomputing, 2023 - Elsevier
In large-scale distributed machine learning (DML), the synchronization efficiency of the
distributed algorithm becomes a critical factor that affects the training time of machine …

Overlapped Data Processing Scheme for Accelerating Training and Validation in Machine Learning

J Choi, D Kang - IEEE Access, 2022 - ieeexplore.ieee.org
For several years, machine learning (ML) technologies have opened up new opportunities that
solve traditional problems by leveraging a rich set of hardware resources. Unfortunately, ML …

Communication-efficient local SGD with age-based worker selection

F Zhu, J Zhang, X Wang - The Journal of Supercomputing, 2023 - Springer
A major bottleneck of distributed learning under the parameter server (PS) framework is the
communication cost due to frequent bidirectional transmissions between the PS and …

A Layer-Based Sparsification Method for Distributed DNN Training

Y Hu, Q Ye, Z Zhang, J Lv - 2022 IEEE 24th Int Conf on High …, 2022 - ieeexplore.ieee.org
With the increasing size of Deep Neural Networks (DNNs) and datasets, DNN training consumes
more and more time. Various distributed strategies have been utilized to speed up the …

Analysis and Application of Power Information System Log Based on Microservice

W Cao, Q Meng, G Chen, Q Liu - 2022 IEEE 5th Advanced …, 2022 - ieeexplore.ieee.org
The rapid development of artificial intelligence technology, combined with the efficiency of
deploying applications on cloud platforms, allows more and more power information …