A variety of paradigms have been proposed to speed up Markov chain mixing, ranging from non-backtracking random walks to simulated annealing and lifted Metropolis–Hastings. We …
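As a minimal illustration of the first paradigm mentioned above (not code from that paper), a non-backtracking random walk forbids the walker from immediately returning to the vertex it just left; the cycle-graph topology below is an assumption chosen for simplicity.

```python
import random

# Illustrative non-backtracking random walk: the walker never steps straight
# back to the vertex it just left, suppressing the diffusive back-and-forth
# that slows ordinary random-walk mixing.
def non_backtracking_walk(adj, start, steps, seed=0):
    rng = random.Random(seed)
    prev, cur = None, start
    path = [start]
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev]
        if not choices:               # dead end: backtracking is forced
            choices = adj[cur]
        prev, cur = cur, rng.choice(choices)
        path.append(cur)
    return path

# 6-cycle: each vertex's neighbors are its two ring neighbors
n = 6
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
path = non_backtracking_walk(adj, start=0, steps=10)
print(path)  # no triple in the path satisfies path[k+2] == path[k]
```

On a cycle this walk simply keeps circulating in its initial direction, which is exactly why it mixes faster than a walk that can reverse.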
In order to make meaningful predictions, modern machine learning models require huge amounts of data, and are generally trained in a distributed way, i.e., using many computing …
Q Yang, R Zheng, J Guo, T Chen - IEEE Access, 2021 - ieeexplore.ieee.org
Time synchronization is an essential problem for energy-harvesting wireless sensor networks (EH-WSNs), and is closely related to efficient resource scheduling, energy …
This thesis considers the problem of average consensus, distributed Stochastic Gradient Descent (SGD) in both centralized and decentralized forms, and their communication requirements …
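Average consensus, as in the thesis snippet above, can be sketched as repeated mixing with a doubly stochastic matrix; the ring topology and the specific weights below are illustrative assumptions, not taken from that work.

```python
import numpy as np

# Illustrative average consensus on a 5-node ring: each node repeatedly
# replaces its value with a weighted average of its own value and its
# two neighbors' values.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25   # rows and columns sum to 1: doubly stochastic

x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])  # initial local values
target = x.mean()                        # consensus value, here 4.0
for _ in range(200):
    x = W @ x                            # one round of gossip with neighbors

print(np.allclose(x, target))            # True: all nodes reach the average
```

Because W is doubly stochastic and the ring with self-loops is connected and aperiodic, the iterates contract toward the average at a rate set by W's second-largest eigenvalue, which is what the communication-requirement analyses in this literature bound.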
XL Feng - … on Computational Intelligence and Security (CIS), 2021 - ieeexplore.ieee.org
In this paper, we consider two kinds of fast-converging consensus protocols for high-order multi-agent systems, both in the general case and in the case where outdated agent states are introduced …
X Ren, D Li, Y Xi, H Shao - 2021 40th Chinese Control …, 2021 - ieeexplore.ieee.org
This paper studies distributed optimization over a multi-agent network. We develop and analyze a novel accelerated distributed gradient descent method, termed G-DGDlm, for …
This paper considers the minimization of a sum of smooth and strongly convex functions dispatched over the nodes of a communication network. Previous works on the subject …
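The problem class in the last snippet, minimizing a sum of smooth, strongly convex local functions over a network, can be sketched with plain decentralized gradient descent (not the accelerated methods the papers above propose); the quadratic local objectives, complete-graph mixing matrix, and diminishing step size below are illustrative assumptions.

```python
import numpy as np

# Decentralized gradient descent (DGD) sketch for min_x sum_i f_i(x) with
# f_i(x) = 0.5 * (x - b_i)^2 (smooth and strongly convex). The minimizer
# of the sum is mean(b). Each node holds one f_i and one local iterate.
n = 4
b = np.array([1.0, 3.0, -2.0, 6.0])   # local targets, one per node
W = np.full((n, n), 1.0 / n)          # complete graph, uniform mixing weights

x = np.zeros(n)                       # local iterates, one per node
for k in range(1, 2001):
    grad = x - b                      # each node's local gradient of f_i
    x = W @ x - (1.0 / k) * grad      # mix with neighbors, then descend

print(np.allclose(x, b.mean(), atol=1e-2))  # nodes agree near mean(b) = 2.0
```

With a diminishing step size the iterates converge to the exact global minimizer; with a constant step, plain DGD only reaches a neighborhood of it, which is one motivation for the accelerated and optimal methods these search results describe.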