Authors
Kilian Pfeiffer, Konstantinos Balaskas, Kostas Siozios, Jörg Henkel
Publication date
2024/2/28
Journal
arXiv preprint arXiv:2402.18569
Description
In Federated Learning (FL), devices that participate in the training usually have heterogeneous resources, e.g., limited energy availability. In current deployments of FL, devices that do not fulfill certain hardware requirements are often dropped from the collaborative training. However, dropping devices in FL can degrade training accuracy and introduce bias or unfairness. Several works have tackled this problem at the algorithmic level, e.g., by letting constrained devices train a subset of the server neural network (NN) model. However, it has been observed that these techniques are not effective w.r.t. accuracy. Importantly, they make simplistic assumptions about devices' resources via indirect metrics such as multiply-accumulate (MAC) operations or peak memory requirements. In this work, for the first time, we consider on-device accelerator design for FL with heterogeneous devices. We utilize compressed arithmetic formats and approximate computing, aiming to satisfy limited energy budgets. Using a hardware-aware energy model, we observe that, contrary to the state of the art's moderate energy reduction, our technique lowers the energy requirements by 4x while maintaining higher accuracy.
Scholar articles
K Pfeiffer, K Balaskas, K Siozios, J Henkel - arXiv preprint arXiv:2402.18569, 2024
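The abstract describes letting energy-constrained clients participate in FL by training with compressed arithmetic formats under a per-device energy budget. Below is a minimal sketch of that general idea, not the paper's implementation: it assumes a toy least-squares task, a hypothetical mapping from energy budget to bit-width, and uniform per-tensor quantization as a stand-in for the paper's compressed formats, approximate computing, and hardware-aware energy model.

```python
# Illustrative sketch (assumptions, not the authors' method): FedAvg-style
# aggregation where energy-constrained clients run local updates at a
# reduced arithmetic precision instead of being dropped from training.
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric per-tensor quantization to the given bit-width."""
    if bits >= 32:
        return weights
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return weights
    return np.round(weights / scale) * scale

def choose_bits(energy_budget):
    """Toy (assumed) mapping from a client's energy budget to a bit-width."""
    if energy_budget < 0.25:
        return 4
    if energy_budget < 0.5:
        return 8
    return 32  # unconstrained clients keep full precision

def local_update(global_w, data, bits, lr=0.1):
    """One SGD step on a least-squares objective, kept in reduced precision."""
    x, y = data
    w = quantize(global_w, bits)
    grad = 2 * x.T @ (x @ w - y) / len(y)
    return quantize(w - lr * quantize(grad, bits), bits)

def federated_round(global_w, clients):
    """Aggregate quantized local updates from all clients (no device dropping)."""
    updates = [local_update(global_w, c["data"], choose_bits(c["energy"]))
               for c in clients]
    return np.mean(updates, axis=0)

# Example: three clients with heterogeneous energy budgets on a shared linear task.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for energy in (0.1, 0.4, 1.0):
    X = rng.normal(size=(32, 2))
    y = X @ true_w + 0.01 * rng.normal(size=32)
    clients.append({"energy": energy, "data": (X, y)})

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
print("recovered weights:", w)  # approaches true_w despite low-precision clients
```

The bit-width thresholds and the linear-regression task are placeholders; the point is only that constrained clients contribute coarser but still useful updates instead of being excluded.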