Multikernel clustering via non-negative matrix factorization tailored graph tensor over distributed networks

Z Ren, M Mukherjee, M Bennis… - IEEE Journal on Selected …, 2020 - ieeexplore.ieee.org
Next-generation wireless networks are witnessing an increasing number of clustering
applications and producing large amounts of non-linear, unlabeled data. In some …
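
For readers unfamiliar with the non-negative matrix factorization (NMF) building block named in this title, the following is a minimal sketch of the classic multiplicative-update rule (Lee–Seung style); it is the generic procedure only, not the paper's graph-tensor-tailored multikernel method, and the matrix names and sizes below are illustrative assumptions.

import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-10):
    """Classic multiplicative-update NMF: X ~= W @ H with W, H >= 0."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates keep every entry non-negative.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage: factor a random non-negative 20x15 matrix into rank-4 factors.
X = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_multiplicative(X, rank=4)
print("reconstruction error:", np.linalg.norm(X - W @ H))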

Communication efficient decentralized learning over bipartite graphs

CB Issaid, A Elgabli, J Park, M Bennis… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
In this paper, we propose a communication-efficient decentralized machine learning
framework that solves a consensus optimization problem defined over a network of inter …
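
As a point of reference for the consensus-optimization setting mentioned in this snippet, here is a minimal sketch of a textbook consensus ADMM iteration for distributed least squares; it is the standard formulation, not the bipartite-graph algorithm proposed in the paper, and all problem data below are synthetic assumptions.

import numpy as np

def consensus_admm_least_squares(A_list, b_list, rho=1.0, n_iter=100):
    """Textbook consensus ADMM for sum_i 0.5*||A_i x - b_i||^2."""
    d = A_list[0].shape[1]
    N = len(A_list)
    x = [np.zeros(d) for _ in range(N)]
    u = [np.zeros(d) for _ in range(N)]
    z = np.zeros(d)
    for _ in range(n_iter):
        # Local primal updates (closed form for least squares).
        for i in range(N):
            lhs = A_list[i].T @ A_list[i] + rho * np.eye(d)
            rhs = A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(lhs, rhs)
        # Global averaging step (the communication round).
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        # Dual updates.
        for i in range(N):
            u[i] += x[i] - z
    return z

# Toy usage: 4 workers, each holding a slice of a least-squares problem.
rng = np.random.default_rng(0)
A_list = [rng.standard_normal((30, 5)) for _ in range(4)]
x_true = rng.standard_normal(5)
b_list = [A @ x_true + 0.01 * rng.standard_normal(30) for A in A_list]
print("recovered:", consensus_admm_least_squares(A_list, b_list).round(3))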

Communication-efficient federated learning: A second order newton-type method with analog over-the-air aggregation

M Krouka, A Elgabli, CB Issaid… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Owing to their fast convergence, second-order Newton-type learning methods have recently
received attention in the federated learning (FL) setting. However, current solutions are …
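
To make the snippet's terms concrete, below is a rough sketch of a federated Newton-type step whose averaged aggregation is perturbed by noise, loosely mimicking analog over-the-air superposition; it is not the paper's scheme, and the logistic-regression objective, damping, noise level, and client data are all assumptions for illustration.

import numpy as np

def local_newton_direction(X, y, w, damping=1e-3):
    """Damped Newton direction for a local logistic loss (generic)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - y) / len(y)
    H = (X.T * (p * (1 - p))) @ X / len(y) + damping * np.eye(X.shape[1])
    return np.linalg.solve(H, grad)

def federated_newton_round(clients, w, noise_std=0.01, lr=1.0, rng=None):
    """One round: clients send Newton directions; the noisy mean mimics an analog sum."""
    if rng is None:
        rng = np.random.default_rng(0)
    dirs = np.array([local_newton_direction(X, y, w) for X, y in clients])
    aggregated = dirs.mean(axis=0) + noise_std * rng.standard_normal(w.shape)
    return w - lr * aggregated

# Toy usage: 3 clients with synthetic (non-separable) logistic-regression data.
rng = np.random.default_rng(1)
w_true = rng.standard_normal(4)
clients = []
for _ in range(3):
    X = rng.standard_normal((200, 4))
    y = (X @ w_true + rng.standard_normal(200) > 0).astype(float)
    clients.append((X, y))
w = np.zeros(4)
for _ in range(10):
    w = federated_newton_round(clients, w, rng=rng)
print("estimate:", w.round(2))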

Distributed learning based on 1-bit gradient coding in the presence of stragglers

C Li, M Skoglund - IEEE Transactions on Communications, 2024 - ieeexplore.ieee.org
This paper considers the problem of distributed learning (DL) in the presence of stragglers.
For this problem, DL methods based on gradient coding have been widely investigated …
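
As background for the 1-bit idea in this snippet, here is a minimal sketch of sign-based (1-bit) gradient quantization with majority-vote aggregation over the workers that respond in time; it illustrates the quantization ingredient only and is not the paper's gradient-coding construction (which adds data redundancy across workers), and the straggler pattern below is an assumption.

import numpy as np

def one_bit_encode(grad):
    """Quantize a gradient to its sign (1 bit per coordinate)."""
    return np.sign(grad).astype(np.int8)

def majority_vote_aggregate(bit_grads):
    """Aggregate 1-bit gradients from the responding workers by majority vote."""
    return np.sign(np.sum(bit_grads, axis=0))

def distributed_sign_step(w, worker_grads, responded, lr=0.1):
    """One step using only the non-straggling workers' 1-bit messages."""
    bits = [one_bit_encode(g) for g, ok in zip(worker_grads, responded) if ok]
    return w - lr * majority_vote_aggregate(bits)

# Toy usage: 5 workers, worker 5 is a straggler this round and is ignored.
rng = np.random.default_rng(0)
w = np.zeros(6)
worker_grads = [rng.standard_normal(6) + 0.5 for _ in range(5)]
responded = [True, True, True, True, False]
print(distributed_sign_step(w, worker_grads, responded))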

Communication efficient distributed learning with censored, quantized, and generalized group ADMM

CB Issaid, A Elgabli, J Park, M Bennis… - arXiv preprint arXiv …, 2020 - arxiv.org
In this paper, we propose a communication-efficient decentralized machine learning
framework that solves a consensus optimization problem defined over a network of inter …
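
Since this snippet names censoring and quantization as the communication-saving ingredients, a small sketch of those two generic primitives follows; the threshold rule and the uniform quantizer below are common illustrations, not the specific design in the paper, and the parameter names are assumptions.

import numpy as np

def uniform_quantize(x, num_bits=4, scale=None):
    """Uniform quantizer: map each entry to one of 2**num_bits levels."""
    scale = scale or (np.max(np.abs(x)) + 1e-12)
    levels = 2 ** num_bits - 1
    q = np.round((x / scale + 1.0) / 2.0 * levels)
    return (2.0 * q / levels - 1.0) * scale

def censored_transmit(new_model, last_sent, threshold=1e-2, num_bits=4):
    """Send a quantized update only if the model moved enough (censoring)."""
    diff = new_model - last_sent
    if np.linalg.norm(diff) < threshold:
        return None, last_sent          # skip this round: nothing transmitted
    msg = uniform_quantize(diff, num_bits)
    return msg, last_sent + msg         # receiver tracks last_sent + msg

# Toy usage: a small change is censored, a large change is quantized and sent.
last = np.zeros(3)
msg, last = censored_transmit(np.array([1e-3, -2e-3, 0.0]), last)
print("small update sent?", msg is not None)
msg, last = censored_transmit(np.array([0.8, -0.4, 0.2]), last)
print("large update sent?", msg is not None, "payload:", msg)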

Joint model pruning and topology construction for accelerating decentralized machine learning

Z Jiang, Y Xu, H Xu, L Wang, C Qiao… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Recently, mobile and embedded devices worldwide have been generating massive amounts of data at
the network edge. To efficiently exploit the data from distributed devices, we concentrate on …
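
This snippet refers to model pruning as one of its two levers; the sketch below shows generic magnitude-based pruning, the simplest common variant and not necessarily the criterion used in the paper, with an illustrative layer dictionary and sparsity level as assumptions.

import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of each layer's weights."""
    pruned = {}
    for name, w in weights.items():
        k = int(np.floor(sparsity * w.size))
        if k == 0:
            pruned[name] = w.copy()
            continue
        threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        mask = np.abs(w) > threshold
        pruned[name] = w * mask
    return pruned

# Toy usage: prune roughly half of each layer in a two-layer toy model.
rng = np.random.default_rng(0)
model = {"fc1": rng.standard_normal((8, 4)), "fc2": rng.standard_normal((4, 2))}
pruned = magnitude_prune(model, sparsity=0.5)
print({k: int((v == 0).sum()) for k, v in pruned.items()})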

L-FGADMM: Layer-wise federated group ADMM for communication efficient decentralized deep learning

A Elgabli, J Park, S Ahmed… - 2020 IEEE Wireless …, 2020 - ieeexplore.ieee.org
This article proposes a communication-efficient decentralized deep learning algorithm,
coined layer-wise federated group ADMM (L-FGADMM). To minimize an empirical risk …

Improved Communication Efficiency in Federated Natural Policy Gradient via ADMM-based Gradient Updates

G Lan, H Wang, J Anderson, C Brinton… - arXiv preprint arXiv …, 2023 - arxiv.org
Federated reinforcement learning (FedRL) enables agents to collaboratively train a global
policy without sharing their individual data. However, high communication overhead …

ADMM-Tracking Gradient for Distributed Optimization over Asynchronous and Unreliable Networks

G Carnevale, N Bastianello, G Notarstefano… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we propose (i) a novel distributed algorithm for consensus optimization over
networks and (ii) a robust extension tailored to deal with asynchronous agents and packet …

Decentralized ADMM with compressed and event-triggered communication

Z Zhang, S Yang, W Xu - Neural Networks, 2023 - Elsevier
This paper considers the decentralized optimization problem, where agents in a network
cooperate to minimize the sum of their local objective functions by communication and local …
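
To illustrate the two communication-saving mechanisms named in this last snippet, here is a generic sketch of a top-k compression operator combined with an event-trigger test; it shows compressed, event-triggered messaging in general, not the decentralized ADMM algorithm of the paper, and the trigger rule and constants are assumptions.

import numpy as np

def top_k_compress(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def event_triggered_message(state, last_broadcast, k=2, trigger=0.1):
    """Broadcast a compressed state difference only when the trigger fires."""
    diff = state - last_broadcast
    if np.linalg.norm(diff) <= trigger * np.linalg.norm(state):
        return None, last_broadcast                 # no communication this round
    msg = top_k_compress(diff, k)
    return msg, last_broadcast + msg                # neighbours track this estimate

# Toy usage: the first call triggers a broadcast, the second is suppressed.
state = np.array([0.9, -0.05, 0.4, 0.02])
msg, est = event_triggered_message(state, np.zeros(4))
print("sent:", msg)
msg, est = event_triggered_message(state + 1e-3, est)
print("sent:", msg)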