A privacy-preserving and non-interactive federated learning scheme for regression training with gradient descent

F Wang, H Zhu, R Lu, Y Zheng, H Li - Information Sciences, 2021 - Elsevier
Abstract
In recent years, the extensive application of machine learning technologies has been witnessed in various fields. However, in many applications, massive data are distributed across multiple data owners. Meanwhile, due to privacy concerns and communication constraints, it is difficult to bridge the data silos among data owners to train a global machine learning model. In this paper, we propose a privacy-preserving and non-interactive federated learning scheme for regression training with gradient descent, named VANE. With VANE, multiple data owners are able to train a global linear, ridge, or logistic regression model with the assistance of the cloud, while their private local training data are well protected. Specifically, we first design a secure data aggregation algorithm with which local training data from multiple data owners can be aggregated and used to train a global model without disclosing any private information. Meanwhile, benefiting from our data pre-processing method, the whole training process is non-interactive, i.e., there is no interaction between data owners and the cloud. Detailed security analysis shows that VANE can well protect the local training data of data owners. The performance evaluation results demonstrate that the training performance of VANE is around 10³ times faster than that of existing schemes.
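To illustrate why regression training can be made non-interactive once local data are aggregated, the following sketch shows a simplified (non-private) version of the idea for linear regression: each owner pre-computes sufficient statistics of its local data, the cloud sums them, and gradient descent then runs entirely on the aggregate. This is only an illustration under assumptions; in VANE the uploaded statistics would additionally be protected by the paper's secure aggregation algorithm, which is omitted here.

```python
import numpy as np

def local_statistics(X, y):
    # Each data owner pre-computes sufficient statistics locally.
    # (In VANE these contributions would be masked/encrypted before
    # upload; here they are in the clear purely for illustration.)
    return X.T @ X, X.T @ y

def aggregate(stats):
    # Cloud-side aggregation: sum the owners' contributions.
    A = sum(s[0] for s in stats)
    b = sum(s[1] for s in stats)
    return A, b

def train_linear_regression(A, b, n, lr=0.1, epochs=500):
    # Gradient descent on the global least-squares objective using
    # only the aggregated statistics: grad = (X^T X w - X^T y) / n.
    # No further interaction with the data owners is needed.
    w = np.zeros(A.shape[0])
    for _ in range(epochs):
        w -= lr * (A @ w - b) / n
    return w

# Toy example: two data owners whose data follow one linear model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X1 = rng.normal(size=(50, 2)); y1 = X1 @ w_true
X2 = rng.normal(size=(60, 2)); y2 = X2 @ w_true
A, b = aggregate([local_statistics(X1, y1), local_statistics(X2, y2)])
w = train_linear_regression(A, b, n=110)
print(np.round(w, 2))
```

The same pattern extends to ridge regression (add a regularization term to the gradient); logistic regression requires the paper's pre-processing step, since its gradient is not a linear function of such fixed statistics.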