In this paper, a new communication-efficient federated learning (FL) framework is proposed, inspired by vector-quantized compressed sensing. The basic strategy of the proposed …
… In this work, we tackle this challenge using tools from quantization theory. In particular, we … quantization scheme for such setups. We show that combining universal vector quantization …
GH Lyu, BA Saputra, S Rini, CH Sun… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
… To quantize the model updates, we use a vector quantization scheme proposed by Eriksson [14] as our main vector quantizer. A TCQ is defined by a set of states, a function describing the state …
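The snippet above truncates the TCQ definition. As a minimal sketch of the idea (an illustrative toy construction, not the exact design of Eriksson [14]), a trellis-coded quantizer can be built from a small state machine whose branches carry reconstruction levels, with Viterbi search choosing the encoding:

```python
# Illustrative 2-state trellis-coded quantizer (TCQ): a toy construction,
# not the exact design of Eriksson [14]. Each trellis branch carries one
# bit and one reconstruction level, so the quantizer spends 1 bit/sample
# yet can reach 4 distinct levels thanks to the trellis memory. Viterbi
# search picks the bit sequence minimizing total squared error.
BRANCH = {  # (state, bit) -> (next_state, reconstruction_level)
    (0, 0): (0, -1.5), (0, 1): (1, 0.5),
    (1, 0): (0, -0.5), (1, 1): (1, 1.5),
}

def tcq_encode(samples):
    """Viterbi search: return the minimum-distortion bit sequence."""
    INF = float("inf")
    cost = [0.0, INF]          # encoder starts in state 0
    bits = [[], []]            # surviving bit path into each state
    for x in samples:
        new_cost, new_bits = [INF, INF], [None, None]
        for s in (0, 1):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                ns, level = BRANCH[(s, b)]
                c = cost[s] + (x - level) ** 2
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    new_bits[ns] = bits[s] + [b]
        cost, bits = new_cost, new_bits
    return bits[0] if cost[0] <= cost[1] else bits[1]

def tcq_decode(bit_seq):
    """Replay the bits through the state machine to recover the levels."""
    state, recon = 0, []
    for b in bit_seq:
        state, level = BRANCH[(state, b)]
        recon.append(level)
    return recon
```

Note the trellis constraint: from a given state only two of the four levels are reachable at each step, which is exactly what lets a 1-bit/sample code exploit a 4-level codebook.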
… federated learning model to preserve local differential privacy. First, we provide two quantization … We provide an analysis of the quantization noise for these methods. Next, we describe our …
Z Liu, H Wang, X Li - IEEE Internet of Things Journal, 2024 - ieeexplore.ieee.org
… that combining universal vector-quantized compressive … vector quantization on the reconstruction process of compressive sensing. The quantization distortion caused by universal …
… In this context, federated ML plays a major role, i.e., learning schemes which … learning vector quantization (LVQ) as a particularly robust training method, and its extensions to metric learning …
… federated learning (FL) framework is proposed, which leverages ideas from vector-quantized … projected local model update is quantized by using a vector quantizer. The global model …
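The pipeline this snippet describes, projecting a local update to lower dimension and then vector-quantizing the projection, can be sketched as follows. The Gaussian projection, block size, and codebook here are our assumptions for illustration, not the paper's exact design:

```python
import numpy as np

# Hedged sketch: each client (i) projects its local model update to a
# lower dimension (compressed-sensing style), then (ii) quantizes the
# projection with a vector quantizer, mapping each length-d block to the
# index of its nearest codeword. All sizes below are illustrative.
rng = np.random.default_rng(0)

def project(update, m):
    """Dimensionality reduction via a random Gaussian matrix (assumed)."""
    A = rng.standard_normal((m, update.size)) / np.sqrt(m)
    return A @ update, A

def vq_encode(x, codebook):
    """Map each length-d block of x to the index of its nearest codeword."""
    d = codebook.shape[1]
    blocks = x.reshape(-1, d)
    # squared distances between every block and every codeword
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def vq_decode(indices, codebook):
    """Server-side dequantization: look up codewords and flatten."""
    return codebook[indices].reshape(-1)

update = rng.standard_normal(64)          # toy local model update
y, A = project(update, m=16)              # 64 -> 16 dimensions
codebook = rng.standard_normal((8, 2))    # 8 codewords of dimension 2
idx = vq_encode(y, codebook)              # 8 indices, 3 bits each
y_hat = vq_decode(idx, codebook)          # quantized projection
```

The client would transmit only the indices (here 8 x 3 = 24 bits instead of 64 floats); the server dequantizes and then runs a compressed-sensing style reconstruction to recover an estimate of the update.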
… enhancement and quantization (JoPEQ), which jointly implements lossy compression and privacy enhancement in FL settings. In particular, JoPEQ utilizes vector quantization based on …
… We study federated learning (FL), which enables mobile devices to utilize their local datasets to collaboratively train a global model with the help of a central server, while keeping data …