Federated learning (FL) is a distributed machine learning strategy that generates a global model by learning from multiple decentralized edge clients. FL enables on-device training …
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a …
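The orchestration pattern these two snippets describe is commonly instantiated as federated averaging: a server broadcasts the model, clients train locally on private data, and the server averages the returned weights. A minimal sketch under invented assumptions (the 1-D least-squares model, client data, learning rate, and round count below are illustrative, not taken from either paper):

```python
# Minimal federated-averaging-style sketch on a 1-D least-squares model.
# Each client holds private (x, y) pairs; the server never sees raw data,
# only locally trained weights, which it averages uniformly.

def local_sgd(w, data, lr=0.1, steps=5):
    """Run a few SGD steps on loss 0.5*(w*x - y)^2 over one client's data."""
    for _ in range(steps):
        for x, y in data:
            grad = (w * x - y) * x
            w -= lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    """One communication round: broadcast, local training, uniform averaging."""
    local_weights = [local_sgd(w_global, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Three clients whose data all satisfy y = 2*x, so the global model should
# approach w = 2 without any client sharing its samples.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(0.5, 1.0)]]
w = 0.0
for _ in range(20):
    w = fedavg_round(w, clients)
```

Real FL systems weight the average by client dataset size and subsample clients per round; uniform weights keep the sketch short.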
In recent years, data and computing resources have typically been distributed across end-user devices and across various regions or organizations. Because of laws or regulations, the distributed data …
Decentralized stochastic optimization methods have gained a lot of attention recently, mainly because of their cheap per-iteration cost, data locality, and communication efficiency. In …
We consider decentralized stochastic optimization with the objective function (e.g., data samples for machine learning tasks) being distributed over $n$ machines that can only …
We study the asynchronous stochastic gradient descent algorithm for distributed training over $n$ workers that might be heterogeneous. In this algorithm, workers compute …
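The defining feature of the asynchronous SGD this snippet studies is staleness: the server applies a gradient that a worker computed at an older copy of the model. A minimal single-process simulation of that delay (the quadratic objective, fixed delay, and step size are invented for illustration):

```python
from collections import deque

def grad(w):
    # Gradient of the toy objective f(w) = 0.5 * (w - 3)^2.
    return w - 3.0

def async_sgd(w0=0.0, lr=0.1, steps=200, delay=2):
    """Simulate asynchronous SGD where each applied gradient was computed
    at a model copy roughly `delay` server updates in the past."""
    w = w0
    # Ring buffer of recent iterates; history[0] is the stalest copy.
    history = deque([w0] * (delay + 1), maxlen=delay + 1)
    for _ in range(steps):
        stale_w = history[0]       # the iterate the worker actually read
        w -= lr * grad(stale_w)    # server applies the stale gradient
        history.append(w)
    return w

w_final = async_sgd()
```

For this strongly convex objective, a small enough step size keeps the delayed recursion stable and the iterates still converge to the minimizer at 3; larger delay-times-step-size products make the iterates oscillate or diverge, which is the trade-off the convergence analyses quantify.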
We propose a general yet simple theorem describing the convergence of SGD under the arbitrary sampling paradigm. Our theorem describes the convergence of an infinite array of …
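One concrete member of the arbitrary-sampling family the snippet refers to is SGD with non-uniform (importance) sampling, where index $i$ is drawn with probability $p_i$ and the gradient is reweighted by $1/(n p_i)$ to stay unbiased. A hedged sketch (the quadratic components, sampling weights, and step size are invented for illustration):

```python
import random

def sgd_arbitrary_sampling(p, lr=0.01, steps=5000, seed=0):
    """SGD on f(w) = (1/n) * sum_i 0.5 * a_i * (w - b_i)^2, sampling
    component i with probability p_i and using the unbiased estimator
    grad_i(w) / (n * p_i)."""
    a = [1.0, 4.0, 9.0]   # per-component curvatures (smoothness constants)
    b = [1.0, 2.0, 3.0]   # per-component minimizers
    n = len(a)
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        i = rng.choices(range(n), weights=p)[0]
        g = a[i] * (w - b[i])        # gradient of the i-th component
        w -= lr * g / (n * p[i])     # importance weight keeps E[update] exact
    return w

# Sampling proportional to the curvatures a_i, one common non-uniform choice.
w = sgd_arbitrary_sampling(p=[1/14, 4/14, 9/14])
```

The true minimizer here is $\sum_i a_i b_i / \sum_i a_i = 36/14 \approx 2.571$; with a constant step size the iterates hover in a small noise ball around it. Uniform sampling `p=[1/3, 1/3, 1/3]` is recovered as a special case, which is the sense in which one theorem can cover a whole array of sampling schemes.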
H Yu, R Jin, S Yang - International Conference on Machine …, 2019 - proceedings.mlr.press
Recent developments on large-scale distributed machine learning applications, e.g., deep neural networks, benefit enormously from the advances in distributed non-convex …
We consider decentralized machine learning over a network where the training data is distributed across $n$ agents, each of which can compute stochastic model updates on …
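The network setting in this snippet (and the decentralized-optimization snippets above) is often instantiated as gossip-based decentralized SGD: each agent takes a local stochastic step on its own objective, then averages its model with its neighbors through a doubly stochastic mixing matrix. A minimal sketch on a ring of four agents (the quadratic objectives, uniform 1/3 mixing weights, and step size are invented for illustration):

```python
def decentralized_sgd(b, lr=0.05, steps=500):
    """Each agent i minimizes its own f_i(w) = 0.5 * (w - b_i)^2; gossip
    averaging over a ring drives all agents toward the minimizer of the
    average objective, mean(b)."""
    n = len(b)
    w = [0.0] * n
    for _ in range(steps):
        # Local gradient step on each agent's own data/objective.
        w = [wi - lr * (wi - bi) for wi, bi in zip(w, b)]
        # Gossip step on a ring: mix with the two neighbors using the
        # doubly stochastic weights (1/3, 1/3, 1/3).
        w = [(w[i - 1] + w[i] + w[(i + 1) % n]) / 3.0 for i in range(n)]
    return w

b = [1.0, 2.0, 3.0, 4.0]    # heterogeneous local minimizers; consensus target 2.5
w = decentralized_sgd(b)
```

Because the mixing weights are doubly stochastic, gossip preserves the network-wide average while shrinking disagreement between agents, so every agent ends up close to the global minimizer 2.5 despite never seeing the other agents' data; a constant step size leaves a small residual spread, which is what the convergence analyses bound.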