Edge learning for B5G networks with distributed signal processing: Semantic communication, edge computing, and wireless sensing

W Xu, Z Yang, DWK Ng, M Levorato… - IEEE Journal of …, 2023 - ieeexplore.ieee.org
To process and transfer large amounts of data in emerging wireless services, it has become
increasingly appealing to exploit distributed data communication and learning. Specifically …

A Damped Newton Method Achieves Global and Local Quadratic Convergence Rate

S Hanzely, D Kamzolov… - Advances in …, 2022 - proceedings.neurips.cc
In this paper, we present the first stepsize schedule for the Newton method that results in fast
global and local convergence guarantees. In particular, we a) prove an $\mathcal O\left …
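
For readers unfamiliar with the method class, a minimal damped Newton iteration looks as follows. The damping rule below is the classical self-concordant choice, not the specific stepsize schedule this paper proves rates for; the function names and the toy objective are illustrative.

```python
import numpy as np

def damped_newton(grad, hess, x0, iters=50):
    # Damped Newton step: x <- x - a_k * H^{-1} g with a_k = 1 / (1 + lam_k),
    # where lam_k is the Newton decrement. This is the textbook damping for
    # self-concordant functions, NOT the schedule analyzed in the paper.
    x = x0.astype(float)
    for _ in range(iters):
        g = grad(x)
        H = hess(x)
        d = np.linalg.solve(H, g)
        lam = np.sqrt(g @ d)          # Newton decrement ||g||_{H^{-1}}
        x = x - d / (1.0 + lam)       # recovers the full Newton step as lam -> 0
    return x

# Toy usage: minimize f(x) = ||x||^2 + sum(x_i^4), whose minimum is the origin.
grad = lambda x: 2 * x + 4 * x**3
hess = lambda x: np.diag(2 + 12 * x**2)
print(damped_newton(grad, hess, np.array([3.0, -2.0])))
```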

FLECS: A federated learning second-order framework via compression and sketching

A Agafonov, D Kamzolov, R Tappenden… - arXiv preprint arXiv …, 2022 - arxiv.org
Inspired by the recent work FedNL (Safaryan et al., FedNL: Making Newton-Type Methods
Applicable to Federated Learning), we propose a new communication-efficient second-order …
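
The core communication saving of second-order methods with sketching can be illustrated with a toy example: instead of uploading a full d x d Hessian, a worker sends its product with a random sketching matrix. The Gaussian sketch and function name below are illustrative assumptions, not the FLECS protocol itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def upload_sketched_hessian(H, k, rng):
    # Compress the d x d local Hessian to the d x k product H @ S using a
    # random Gaussian sketch S; if the seed generating S is shared with the
    # server, only H @ S (k columns instead of d) travels over the network.
    d = H.shape[0]
    S = rng.standard_normal((d, k)) / np.sqrt(k)
    return H @ S

d, k = 200, 20
A = rng.standard_normal((d, d))
H = A @ A.T + np.eye(d)                            # a PSD stand-in for a local Hessian
print(upload_sketched_hessian(H, k, rng).shape)    # (200, 20): ~10% of the upload cost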

Communication-Efficient Federated Learning: A Variance-Reduced Stochastic Approach With Adaptive Sparsification

B Wang, J Fang, H Li, B Zeng - IEEE Transactions on Signal …, 2023 - ieeexplore.ieee.org
Federated learning (FL) is an emerging distributed machine learning paradigm that aims to
realize model training without gathering the data from data sources to a central processing …
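
As a rough illustration of the two ingredients in the title, here is a generic energy-based adaptive sparsifier and an SVRG-style variance-reduced estimator. Both are textbook building blocks sketched under that assumption, not the paper's specific algorithm.

```python
import numpy as np

def adaptive_sparsify(v, energy=0.9):
    # Keep the fewest coordinates that retain `energy` of ||v||^2 -- one
    # simple way to let the sparsity level adapt to the gradient itself.
    order = np.argsort(np.abs(v))[::-1]
    csum = np.cumsum(v[order] ** 2)
    k = int(np.searchsorted(csum, energy * csum[-1])) + 1
    out = np.zeros_like(v)
    out[order[:k]] = v[order[:k]]
    return out

def svrg_gradient(grad_i, x, x_snap, mu, i):
    # SVRG-style variance-reduced estimator: stochastic gradient at x,
    # corrected by the snapshot gradient and the full snapshot gradient mu.
    return grad_i(x, i) - grad_i(x_snap, i) + mu

v = np.array([5.0, -0.1, 0.2, 3.0, -0.05])
print(adaptive_sparsify(v))   # keeps only the two dominant coordinates
```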

Distributed learning based on 1-bit gradient coding in the presence of stragglers

C Li, M Skoglund - IEEE Transactions on Communications, 2024 - ieeexplore.ieee.org
This paper considers the problem of distributed learning (DL) in the presence of stragglers.
For this problem, DL methods based on gradient coding have been widely investigated …
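
A minimal sketch of the 1-bit idea: workers upload only gradient signs and the server takes an elementwise majority vote over whoever arrived in time. The paper combines this with gradient coding over redundant data assignments, which is omitted here; the straggler handling below is a simplification for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def majority_vote(sign_grads):
    # Elementwise majority vote over the 1-bit (sign) gradients that
    # actually arrived; stragglers contribute no vote this round.
    return np.sign(np.sum(sign_grads, axis=0))

grads = [rng.standard_normal(6) for _ in range(8)]
arrived = [np.sign(g) for g, slow in zip(grads, [False] * 6 + [True] * 2)
           if not slow]                    # the last two workers time out
print(majority_vote(arrived))
```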

Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach

B Wang, J Fang, H Li, YC Eldar - IEEE Transactions on Signal …, 2024 - ieeexplore.ieee.org
Federated learning (FL) is a machine learning paradigm that targets model training without
gathering the local data dispersed over various data sources. Standard FL, which employs a …
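
To make the two keywords concrete, the sketch below pairs a standard SAGA gradient estimator with a generic event-triggered uplink; the ConFederated (multi-server) topology itself is not modeled, and the class name and threshold rule are assumptions.

```python
import numpy as np

def saga_gradient(grad_i, x, table, i):
    # SAGA estimator: fresh sample gradient, corrected by its stored copy
    # and the running average of all stored gradients.
    g_new = grad_i(x, i)
    g = g_new - table[i] + table.mean(axis=0)
    table[i] = g_new
    return g

class EventTriggeredUplink:
    # A client transmits only when its update deviates enough from the
    # last value the server holds; otherwise the server reuses the stale
    # copy and the round costs no uplink communication.
    def __init__(self, dim, threshold):
        self.last_sent = np.zeros(dim)
        self.threshold = threshold

    def maybe_send(self, g):
        if np.linalg.norm(g - self.last_sent) > self.threshold:
            self.last_sent = g.copy()
            return g        # event fired: transmit
        return None         # censored: skip this round
```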

FLECS-CGD: A federated learning second-order framework via compression and sketching with compressed gradient differences

A Agafonov, B Erraji, M Takáč - arXiv preprint arXiv:2210.09626, 2022 - arxiv.org
In the recent paper FLECS (Agafonov et al., FLECS: A Federated Learning Second-Order
Framework via Compression and Sketching), the second-order framework FLECS was …
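
A DIANA-style sketch of the ingredient this title adds, compressed gradient differences: worker and server track a shared reference vector so only the (increasingly compressible) change is transmitted. The class name and the Top-K operator are illustrative choices, not the FLECS-CGD specification.

```python
import numpy as np

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class GradientDifferenceCompressor:
    # Both sides maintain a reference vector h; only the compressed
    # difference g - h is transmitted. As gradients stabilize, g - h
    # shrinks, so the same k loses less information over time.
    def __init__(self, dim, k):
        self.h = np.zeros(dim)
        self.k = k

    def compress(self, g):
        delta = top_k(g - self.h, self.k)
        self.h += delta      # the server mirrors this update exactly
        return delta         # uplink payload: k nonzeros
```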

Adaptive Top-K in SGD for Communication-Efficient Distributed Learning in Multi-Robot Collaboration

M Ruan, G Yan, Y Xiao, L Song… - IEEE Journal of Selected …, 2024 - ieeexplore.ieee.org
Distributed stochastic gradient descent (D-SGD) with gradient compression has become a
popular communication-efficient solution for accelerating optimization procedures in …
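
The usual error-feedback loop around Top-K compression might look like this; how the adaptive scheme picks k per iteration is the paper's contribution and is deliberately left abstract here.

```python
import numpy as np

def compressed_step(grad, memory, k):
    # One worker's upload in compressed D-SGD with error feedback: the
    # mass dropped by Top-K is remembered in `memory` and retried later,
    # which keeps aggressive (small-k) compression from losing signal.
    v = grad + memory
    sent = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    sent[idx] = v[idx]
    return sent, v - sent    # (payload, residual carried to the next round)

g = np.array([0.9, -0.02, 0.4, 0.01])
payload, mem = compressed_step(g, np.zeros(4), k=1)
print(payload, mem)          # only the largest coordinate is uploaded now
```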

Adaptive Optimization Algorithms for Machine Learning

S Hanzely - arXiv preprint arXiv:2311.10203, 2023 - arxiv.org
Machine learning assumes a pivotal role in our data-driven world. The increasing scale of
models and datasets necessitates quick and reliable algorithms for model training. This …

Communication-efficient federated learning using censored heavy ball descent

Y Chen, RS Blum, BM Sadler - IEEE Transactions on Signal …, 2022 - ieeexplore.ieee.org
Distributed machine learning enables scalability and computational offloading, but requires
significant levels of communication. Consequently, communication efficiency in distributed …
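
A hedged sketch of censoring applied to heavy-ball descent: workers skip uploads whenever their gradient has barely changed since the last transmission, and the server reuses the stale copy. The threshold rule, step sizes, and function names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def censored_heavy_ball(worker_grads, x0, alpha=0.1, beta=0.9,
                        tau=1e-3, iters=200):
    # Heavy-ball descent where a worker uploads its gradient only when it
    # differs by more than tau from its last transmitted value.
    x, x_prev = x0.copy(), x0.copy()
    last = [np.zeros_like(x0) for _ in worker_grads]
    for _ in range(iters):
        for i, gfun in enumerate(worker_grads):
            g = gfun(x)
            if np.linalg.norm(g - last[i]) > tau:  # censoring test
                last[i] = g                        # transmit fresh gradient
        step = alpha * np.mean(last, axis=0)
        x, x_prev = x - step + beta * (x - x_prev), x
    return x

# Two workers with quadratic losses centered at 0 and 1; the optimum is 0.5.
workers = [lambda x, a=a: 2 * (x - a) for a in (np.zeros(3), np.ones(3))]
print(censored_heavy_ball(workers, np.full(3, 5.0)))
```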