We study dual-based algorithms for distributed convex optimization problems over networks, where the objective is to minimize a sum $\sum_{i=1}^{m} f_i(z)$ of functions held by the agents of a network. We …
We study distributed composite optimization over networks: agents minimize a sum of smooth (strongly) convex functions, the agents' sum-utility, plus a nonsmooth (extended …
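To fix notation for the two snippets above, the underlying problem can be written in the following standard form; the consensus-constrained reformulation and the splitting of the nonsmooth term are textbook conventions rather than the exact formulation of either paper:

\[
\min_{z \in \mathbb{R}^d} \ \sum_{i=1}^{m} f_i(z) + G(z)
\qquad\Longleftrightarrow\qquad
\min_{x_1,\dots,x_m \in \mathbb{R}^d} \ \sum_{i=1}^{m} \Bigl( f_i(x_i) + \tfrac{1}{m}\, G(x_i) \Bigr)
\quad \text{s.t. } x_i = x_j \ \text{ for all edges } (i,j),
\]

where agent $i$ privately holds the smooth (strongly) convex cost $f_i$ and a local copy $x_i$ of the decision variable, $G$ is a common nonsmooth (possibly extended-valued) convex regularizer ($G \equiv 0$ recovers the purely smooth case of the first snippet), and the equivalence holds whenever the communication graph is connected.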
The presence of embedded electronics and communication capabilities, as well as sensing and control, in smart devices has given rise to the novel concept of cyber-physical networks …
This work concerns the analysis and design of distributed first-order optimization algorithms over time-varying graphs. The goal of such algorithms is to optimize a global function that is …
B. Hu and U. Syed, Advances in Neural Information Processing Systems, 2019 (proceedings.neurips.cc)
In this paper, we provide a unified analysis of temporal difference learning algorithms with linear function approximators by exploiting their connections to Markov jump linear systems …
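As background for this snippet, the base algorithm being analyzed is temporal difference learning with a linear function approximator. Below is a minimal runnable sketch of the TD(0) update on a synthetic Markov chain; the chain, rewards, features, discount factor, and step size are all made-up illustrative choices, and the Markov-jump-linear-system analysis from the paper is not reproduced.

```python
import numpy as np

# Minimal sketch of TD(0) with a linear function approximator:
#   theta <- theta + alpha * (r + gamma * phi(s_next) @ theta - phi(s) @ theta) * phi(s)
# The Markov chain P, rewards R, features Phi, gamma, and alpha below are
# illustrative assumptions, not taken from the paper in the snippet above.

rng = np.random.default_rng(0)
n_states, n_features, gamma, alpha = 5, 3, 0.9, 0.05

P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)                   # row-stochastic transition matrix
R = rng.standard_normal(n_states)                   # reward collected when leaving a state
Phi = rng.standard_normal((n_states, n_features))   # fixed feature vectors

theta = np.zeros(n_features)
s = 0
for _ in range(20000):
    s_next = rng.choice(n_states, p=P[s])           # sample the next state
    td_error = R[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += alpha * td_error * Phi[s]              # stochastic semi-gradient step
    s = s_next

print("learned weight vector:", theta)
```

Because each update is linear in theta with coefficients that depend on the random state pair (s, s_next), the iteration is a linear recursion driven by a Markov chain, which is the connection to jump linear systems that the snippet alludes to.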
We revisit the recent gradient tracking algorithm for distributed consensus optimization from a control theoretic viewpoint. We show that the algorithm can be constructed by solving a …
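For concreteness, here is a minimal NumPy sketch of the gradient-tracking iteration the snippet refers to, in its common DIGing-style form x_{k+1} = W x_k - alpha * y_k, y_{k+1} = W y_k + grad f(x_{k+1}) - grad f(x_k); the 4-agent ring, the quadratic local costs, and the step size are illustrative assumptions, and the control-theoretic construction described in the paper is not reproduced.

```python
import numpy as np

# Minimal sketch of the standard gradient-tracking iteration:
#   x_{k+1} = W x_k - alpha * y_k
#   y_{k+1} = W y_k + grad f(x_{k+1}) - grad f(x_k)
# The ring network, the costs f_i(x) = 0.5 * ||x - c_i||^2, and the step
# size are illustrative assumptions, not taken from the paper above.

m, d = 4, 3
rng = np.random.default_rng(0)
c = rng.standard_normal((m, d))          # local targets; the optimum is c.mean(axis=0)
grad = lambda X: X - c                   # row i holds agent i's local gradient

# Doubly stochastic mixing matrix for a 4-agent ring
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

alpha = 0.1
X = np.zeros((m, d))                     # one local copy of the decision variable per agent
Y = grad(X)                              # gradient trackers, initialized at the local gradients

for _ in range(300):
    X_new = W @ X - alpha * Y            # consensus step plus tracked-gradient step
    Y = W @ Y + grad(X_new) - grad(X)    # update the estimate of the average gradient
    X = X_new

print("max error to optimum:", np.abs(X - c.mean(axis=0)).max())
```

The second recursion keeps each y_i tracking the network-average gradient, which is what removes the steady-state bias of plain decentralized gradient descent with a constant step size.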
B. Van Scoy and L. Lessard, 62nd IEEE Conference on Decision and Control (CDC), 2023 (ieeexplore.ieee.org)
We consider the distributed optimization problem for a multi-agent system. Here, multiple agents cooperatively optimize an objective by sharing information through a communication …
We study the distributed stochastic gradient (D-SG) method and its accelerated variant (D-ASG) for solving decentralized strongly convex stochastic optimization problems where the …
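For reference, a minimal sketch of the plain D-SG recursion x_{k+1} = W x_k - alpha_k * (grad f_i(x_k) + noise) is given below, reusing the ring network and quadratic costs from the gradient-tracking sketch above; the noise model and the diminishing step-size schedule are assumptions for illustration, and the accelerated D-ASG variant is not shown.

```python
import numpy as np

# Minimal sketch of distributed stochastic gradient (D-SG):
#   x_{k+1} = W x_k - alpha_k * (grad f_i(x_k) + noise)
# The 4-agent ring, quadratic costs, Gaussian gradient noise, and the
# O(1/k) step-size schedule are illustrative assumptions.

m, d = 4, 3
rng = np.random.default_rng(1)
c = rng.standard_normal((m, d))          # f_i(x) = 0.5 * ||x - c_i||^2
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

X = np.zeros((m, d))
for k in range(1, 2001):
    noisy_grad = (X - c) + 0.1 * rng.standard_normal((m, d))   # stochastic local gradients
    X = W @ X - (1.0 / (k + 10)) * noisy_grad                  # mixing step + noisy gradient step

print("distance of network average to optimum:",
      np.linalg.norm(X.mean(axis=0) - c.mean(axis=0)))
```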
We consider the distributed optimization problem in which a network of agents aims to minimize the average of local functions. To solve this problem, several algorithms have …