Multiple players are each given one independent sample, about which they can only provide limited information to a central referee. Each player is allowed to describe its observed …
The large communication cost for exchanging gradients between different nodes significantly limits the scalability of distributed training for large-scale learning models …
We consider parameter estimation in distributed networks, where each sensor in the network observes an independent sample from an underlying distribution and has $ k $ bits to …
A central server needs to perform statistical inference based on samples that are distributed over multiple users who can each send a message of limited length to the center. We study …
We consider distributed parameter estimation using interactive protocols subject to local information constraints such as bandwidth limitations, local differential privacy, and restricted …
P Mayekar, H Tyagi - International Conference on Artificial …, 2020 - proceedings.mlr.press
Abstract We present Rotated Adaptive Tetra-iterated Quantizer (RATQ), a fixed-length quantizer for gradients in first-order stochastic optimization. RATQ is easy to implement and …
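As an illustration of the general idea behind fixed-length gradient quantization (not the RATQ algorithm itself, whose rotation and tetra-iterated dynamic ranges are beyond this truncated snippet), a minimal sketch might look like the following. The clipping range and bit budget here are illustrative choices, not parameters from the paper:

```python
import numpy as np

def fixed_length_quantize(g, num_bits=4, clip=1.0):
    """Minimal sketch of a generic fixed-length gradient quantizer.

    NOT RATQ: this simply clips each coordinate to [-clip, clip] and
    maps it to one of 2**num_bits uniformly spaced levels, so the
    message length is num_bits * len(g) bits regardless of the input.
    """
    levels = 2 ** num_bits
    step = 2 * clip / (levels - 1)
    clipped = np.clip(g, -clip, clip)
    # Integer codewords actually transmitted to the server.
    indices = np.round((clipped + clip) / step).astype(int)
    # Reconstruction the receiver computes from the codewords alone.
    dequantized = indices * step - clip
    return indices, dequantized

indices, g_hat = fixed_length_quantize(np.array([0.3, -0.7, 1.5]), num_bits=4)
```

For in-range coordinates the reconstruction error is at most half a quantization step; coordinates outside the clipping range (like 1.5 above) are saturated, which is one reason adaptive schemes such as RATQ choose the dynamic range data-dependently.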
Multiple users, each given one sample from an unknown distribution, seek to enable a central server to conduct statistical inference. However, each user can only provide limited …
We consider the problem of estimating high-dimensional and nonparametric distributions in distributed networks, where each sensor in the network observes an independent sample …
H Wang, H Hsu, M Diaz… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Disparate treatment occurs when a machine learning model produces different decisions for individuals based on a legally protected or sensitive attribute (e.g., age, sex). In domains …