H Gong, L Jiang, X Liu, Y Wang, L Wang, K Zhang - Sensors, 2022 - mdpi.com
Exchanging gradients is a widely used method in modern multi-node machine learning systems (e.g., distributed training, federated learning). Gradients and weights of the model have …
H Gong, L Jiang, X Liu, Y Wang, O Gastro… - Artificial Intelligence …, 2023 - Springer
Federated Learning (FL) improves the privacy of local training data by exchanging model updates (e.g., local gradients or updated parameters). Gradients and weights of the model …
Z Li, L Wang, G Chen, M Shafq - Authorea Preprints, 2023 - techrxiv.org
In order to preserve data privacy while fully utilizing data from different owners, federated learning is believed to be a promising approach in recent years. However, aiming at …
Z Wang, C Peng, X He, W Tan - Entropy, 2023 - mdpi.com
Federated learning protects the private information in the data set by sharing the average gradient. However, the "Deep Leakage from Gradients" (DLG) algorithm, as a gradient-based …
H Yang, D Xue, M Ge, J Li, G Xu, H Li… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Federated learning (FL) is a distributed machine learning technique that guarantees the privacy of user data. However, FL has been shown to be vulnerable to gradient leakage …
S Liu, Z Wang, Q Lei - arXiv preprint arXiv:2402.09478, 2024 - arxiv.org
Reconstruction attacks and defenses are essential in understanding the data leakage problem in machine learning. However, prior work has centered around empirical …
D Zhang, X Chen, J Shi - International Symposium on Emerging …, 2022 - Springer
Federated learning can complete the neural network model training without uploading users' private data. However, the deep leakage from gradients (DLG) and the compensatory …
In the federated learning scenario, the private data are kept local, and gradients are shared to train the global model. Because gradients are updated according to the private training …
J Wang, S Guo, X Xie, H Qi - IEEE INFOCOM 2022-IEEE …, 2022 - ieeexplore.ieee.org
Federated Learning (FL) is susceptible to gradient leakage attacks, as recent studies show the feasibility of obtaining private training data on clients from publicly shared gradients …
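Several of the entries above concern gradient leakage attacks such as DLG, where a shared gradient reveals private training data. A minimal sketch of why this is possible, for a hypothetical single-sample linear model with a bias term (toy values, not the method of any specific paper above): the weight gradient is the input scaled by the bias gradient, so dividing the two recovers the input exactly.

```python
# Toy gradient-leakage sketch: a single-sample linear model y = w.x + b
# with squared-error loss. All values are hypothetical.

x = [0.3, -1.2, 0.7]       # private training input (kept on the client)
t = 1.5                    # private label
w = [0.5, -0.4, 0.9]       # current global weights (known to the server)
b = 0.1

pred = sum(wi * xi for wi, xi in zip(w, x)) + b
err = 2.0 * (pred - t)     # d(loss)/d(pred) for squared error

g_w = [err * xi for xi in x]   # gradient w.r.t. weights (shared in FL)
g_b = err                      # gradient w.r.t. bias   (shared in FL)

# An honest-but-curious server recovers the input by dividing out g_b,
# since g_w[i] = g_b * x[i] (valid whenever g_b is nonzero):
x_rec = [gw / g_b for gw in g_w]
```

DLG-style attacks generalize this idea to deep networks by optimizing dummy inputs so that their gradients match the shared ones; the linear case above is the degenerate instance where recovery is closed-form.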