Advances in asynchronous parallel and distributed optimization

M Assran, A Aytekin, HR Feyzmahdavian… - Proceedings of the …, 2020 - ieeexplore.ieee.org
Motivated by large-scale optimization problems arising in the context of machine learning, researchers have made several advances in the study of asynchronous parallel and distributed …

Blockchain empowered asynchronous federated learning for secure data sharing in internet of vehicles

Y Lu, X Huang, K Zhang, S Maharjan… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
In the Internet of Vehicles (IoV), data sharing among vehicles for collaborative analysis can improve the driving experience and service quality. However, the bandwidth, security and …

Sharper convergence guarantees for asynchronous SGD for distributed and federated learning

A Koloskova, SU Stich, M Jaggi - Advances in Neural …, 2022 - proceedings.neurips.cc
We study the asynchronous stochastic gradient descent algorithm for distributed training over $n$ workers that might be heterogeneous. In this algorithm, workers compute …
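
A minimal simulation of the asynchronous SGD scheme studied here, assuming a toy least-squares objective; the random-finish model for heterogeneous worker speeds, the step size, and the problem data are illustrative, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)
d, n, T, gamma = 10, 4, 2000, 0.05
A, b = rng.normal(size=(100, d)), rng.normal(size=100)

def stoch_grad(x):
    i = int(rng.integers(len(b)))         # one random sample
    return A[i] * (A[i] @ x - b[i])       # gradient of 0.5*(a_i.x - b_i)^2

x = np.zeros(d)
stale = [x.copy() for _ in range(n)]      # iterate each worker last read
for t in range(T):
    w = int(rng.integers(n))              # whichever worker finishes next
    x = x - gamma * stoch_grad(stale[w])  # apply its (possibly stale) gradient
    stale[w] = x.copy()                   # hand that worker the fresh iterate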

Asynchronous decentralized parallel stochastic gradient descent

X Lian, W Zhang, C Zhang, J Liu - … Conference on Machine …, 2018 - proceedings.mlr.press
Most commonly used distributed machine learning systems are either synchronous or
centralized asynchronous. Synchronous algorithms like AllReduce-SGD perform poorly in a …
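
A serialized sketch of the decentralized asynchronous idea, assuming a ring of workers and a toy quadratic objective; the actual AD-PSGD protocol overlaps gossip averaging with gradient computation, which this simplification does not capture:

import numpy as np

rng = np.random.default_rng(1)
d, n, T, gamma = 5, 8, 5000, 0.02
target = rng.normal(size=d)

def noisy_grad(x):                        # gradient of 0.5*||x - target||^2 + noise
    return (x - target) + 0.1 * rng.normal(size=d)

X = np.zeros((n, d))                      # one local model per worker
for t in range(T):
    i = int(rng.integers(n))              # a worker wakes up
    j = (i + 1) % n                       # its neighbor on the ring
    X[i] = X[j] = 0.5 * (X[i] + X[j])     # pairwise gossip averaging
    X[i] -= gamma * noisy_grad(X[i])      # local stochastic gradient step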

The error-feedback framework: SGD with delayed gradients

SU Stich, SP Karimireddy - Journal of Machine Learning Research, 2020 - jmlr.org
We analyze (stochastic) gradient descent (SGD) with delayed updates on smooth quasi-convex and non-convex functions and derive concise, non-asymptotic convergence rates …
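
The core update analyzed in this line of work is x_{t+1} = x_t - γ ∇f(x_{t-τ}); a sketch with a fixed delay τ on a toy quadratic (delay, step size, and objective are illustrative):

import numpy as np
from collections import deque

rng = np.random.default_rng(2)
d, tau, T, gamma = 5, 10, 3000, 0.02
target = rng.normal(size=d)

x = np.zeros(d)
past = deque([x.copy()] * (tau + 1), maxlen=tau + 1)   # last tau+1 iterates
for t in range(T):
    g = (past[0] - target) + 0.1 * rng.normal(size=d)  # gradient at x_{t-tau}
    x = x - gamma * g                                  # update the live iterate
    past.append(x.copy())                              # oldest entry falls out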

Asynchronous parallel stochastic gradient for nonconvex optimization

X Lian, Y Huang, Y Li, J Liu - Advances in neural …, 2015 - proceedings.neurips.cc
Asynchronous parallel implementations of stochastic gradient (SG) methods have been broadly used in training deep neural networks and have achieved many successes in practice recently …
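
A lock-free, shared-memory sketch in the spirit of the asynchronous parallel SG implementations analyzed here, assuming a least-squares objective; note that Python threads serialize under the GIL, so this only illustrates the unsynchronized read/update pattern:

import threading
import numpy as np

rng = np.random.default_rng(3)
d, n_threads, steps, gamma = 20, 4, 2000, 0.01
A, b = rng.normal(size=(200, d)), rng.normal(size=200)
x = np.zeros(d)                           # shared parameters, no lock

def worker(seed):
    r = np.random.default_rng(seed)
    for _ in range(steps):
        i = int(r.integers(len(b)))
        g = A[i] * (A[i] @ x - b[i])      # read x without synchronization
        x[:] -= gamma * g                 # in-place, lock-free write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_threads)]
for th in threads:
    th.start()
for th in threads:
    th.join()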

Decentralized gossip-based stochastic bilevel optimization over communication networks

S Yang, X Zhang, M Wang - Advances in neural information …, 2022 - proceedings.neurips.cc
Bilevel optimization has gained growing interest, with numerous applications found in meta learning, minimax games, reinforcement learning, and nested composition …
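
Setting the gossip/network aspect aside, a single-loop, two-timescale sketch of stochastic bilevel optimization on a toy problem where the lower-level solution is v*(u) = u, so the hypergradient chain rule is trivial; all quantities here are illustrative:

import numpy as np

rng = np.random.default_rng(4)
d, T = 5, 4000
alpha, beta = 0.05, 0.01                  # lower- and upper-level step sizes
target = rng.normal(size=d)

u, v = np.zeros(d), np.zeros(d)
for t in range(T):
    # lower level: v tracks argmin_v g(u, v) = 0.5*||v - u||^2
    v -= alpha * ((v - u) + 0.1 * rng.normal(size=d))
    # upper level: f(u) = 0.5*||v*(u) - target||^2 with dv*/du = I,
    # so a stochastic hypergradient is (v - target) + noise
    u -= beta * ((v - target) + 0.1 * rng.normal(size=d))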

Improving financial trading decisions using deep Q-learning: Predicting the number of shares, action strategies, and transfer learning

G Jeong, HY Kim - Expert Systems with Applications, 2019 - Elsevier
We study trading systems using reinforcement learning with three newly proposed methods
to maximize total profits and reflect real financial market situations while overcoming the …
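
The paper builds on deep Q-learning; stripped to its core, the underlying update is tabular Q-learning, sketched here on a hypothetical three-regime toy market (states, actions, rewards, and transitions are all invented for illustration, not the paper's setup):

import numpy as np

rng = np.random.default_rng(5)
n_states = n_actions = 3                  # toy regimes x {buy, hold, sell}
Q = np.zeros((n_states, n_actions))
lr, discount, eps, T = 0.1, 0.95, 0.1, 20000

s = 0
for t in range(T):
    # epsilon-greedy action selection
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next = int(rng.integers(n_states))     # toy random regime transition
    r = float(a == s) + 0.1 * rng.normal()   # toy reward for matching the regime
    Q[s, a] += lr * (r + discount * Q[s_next].max() - Q[s, a])
    s = s_next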

Fast federated learning in the presence of arbitrary device unavailability

X Gu, K Huang, J Zhang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Federated learning (FL) coordinates numerous heterogeneous devices to collaboratively train a shared model while preserving user privacy. Despite its multiple …
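
For context, a sketch of federated averaging when only a random subset of devices responds each round; the paper's algorithm (MIFA) improves on this baseline by reusing each device's most recent stored update, which this sketch omits (objective and availability model are illustrative):

import numpy as np

rng = np.random.default_rng(6)
d, n, rounds, gamma, local_steps = 5, 10, 200, 0.1, 5
optima = rng.normal(size=(n, d))          # heterogeneous per-device optima

x = np.zeros(d)                           # global model
for rnd in range(rounds):
    up = np.flatnonzero(rng.random(n) < 0.5)  # devices available this round
    if up.size == 0:
        continue
    deltas = []
    for i in up:
        xi = x.copy()
        for _ in range(local_steps):      # local SGD on 0.5*||xi - optima[i]||^2
            xi -= gamma * ((xi - optima[i]) + 0.1 * rng.normal(size=d))
        deltas.append(xi - x)
    x = x + np.mean(deltas, axis=0)       # average the available updates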
