BatchCrypt: Efficient Homomorphic Encryption for Cross-Silo Federated Learning

C Zhang, S Li, J Xia, W Wang, F Yan, Y Liu - 2020 USENIX Annual Technical Conference (USENIX ATC 20), 2020 - usenix.org
Abstract
Cross-silo federated learning (FL) enables organizations (e.g., financial or medical) to collaboratively train a machine learning model by aggregating local gradient updates from each client without sharing privacy-sensitive data. To ensure no update is revealed during aggregation, industrial FL frameworks allow clients to mask local gradient updates using additively homomorphic encryption (HE). However, this results in significant computation and communication costs. In our characterization, HE operations dominate the training time while inflating the data transfer amount by two orders of magnitude. In this paper, we present BatchCrypt, a system solution for cross-silo FL that substantially reduces the encryption and communication overhead caused by HE. Instead of encrypting individual gradients with full precision, we encode a batch of quantized gradients into a long integer and encrypt it in one go. To allow gradient-wise aggregation to be performed on ciphertexts of the encoded batches, we develop new quantization and encoding schemes along with a novel gradient clipping technique. We implemented BatchCrypt as a plug-in module in FATE, an industrial cross-silo FL framework. Evaluations with EC2 clients in geo-distributed datacenters show that BatchCrypt achieves a 23×-93× training speedup while reducing the communication overhead by 66×-101×. The accuracy loss due to quantization errors is less than 1%.
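The core idea, quantizing gradients, packing many of them into one long integer, encrypting once, and letting additive HE sum every slot in a single ciphertext addition, can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: it uses the python-paillier library ("phe") as the additive-HE backend, and a simple offset encoding for signed values instead of BatchCrypt's two's-complement-style encoding with overflow-absorbing sign bits; the slot widths and all helper names (quantize, pack, unpack) are illustrative assumptions.

# Minimal sketch of BatchCrypt-style batching, NOT the paper's exact scheme:
# python-paillier ("phe") is assumed as the additive-HE backend, and a simple
# offset encoding stands in for the paper's two's-complement encoding.
import numpy as np
from phe import paillier

VALUE_BITS = 8                   # bits per quantized gradient
SLOT_BITS = 10                   # slot width; 2 spare bits of headroom
SLOTS = 16                       # gradients packed into one ciphertext
OFFSET = 1 << (VALUE_BITS - 1)   # shift signed values into non-negative range
CLIP = 1.0                       # illustrative gradient clipping bound

def quantize(grads):
    """Clip gradients to [-CLIP, CLIP] and map them to signed integers."""
    scale = (2 ** (VALUE_BITS - 1) - 1) / CLIP
    return np.round(np.clip(grads, -CLIP, CLIP) * scale).astype(np.int64), scale

def pack(q):
    """Encode a batch of signed ints into one long integer, one slot each."""
    acc = 0
    for i, v in enumerate(q):
        acc |= (int(v) + OFFSET) << (i * SLOT_BITS)  # non-negative per slot
    return acc

def unpack(acc, n_clients):
    """Split the aggregated integer back into per-slot signed sums."""
    mask = (1 << SLOT_BITS) - 1
    sums = []
    for _ in range(SLOTS):
        sums.append((acc & mask) - n_clients * OFFSET)  # remove summed offsets
        acc >>= SLOT_BITS
    return np.array(sums)

pub, priv = paillier.generate_paillier_keypair(n_length=2048)

# Two clients each quantize a batch of gradients, pack, and encrypt ONCE.
g1 = np.random.uniform(-1, 1, SLOTS)
g2 = np.random.uniform(-1, 1, SLOTS)
q1, scale = quantize(g1)
q2, _ = quantize(g2)
c1, c2 = pub.encrypt(pack(q1)), pub.encrypt(pack(q2))

# The aggregator adds ciphertexts; additive HE sums every slot in one go.
aggregate = priv.decrypt(c1 + c2)
recovered = unpack(aggregate, n_clients=2) / scale
assert np.allclose(recovered, g1 + g2, atol=2 / scale)

Note that the spare bits per slot bound how many client updates can be summed before a slot overflows into its neighbor, which is why the batch encoding must be co-designed with the quantization and gradient clipping schemes, as the abstract notes.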