Deep leakage from gradients

Y Mu - arXiv preprint arXiv:2301.02621, 2022 - arxiv.org
With the development of artificial intelligence technology, the Federated Learning (FL) model has been widely used in many industries for its high efficiency and confidentiality. Some …

Recover user's private training image data by gradient in federated learning

H Gong, L Jiang, X Liu, Y Wang, L Wang, K Zhang - Sensors, 2022 - mdpi.com
Exchanging gradients is a widely used method in modern multi-node machine learning systems (e.g., distributed training, Federated Learning). Gradients and weights of the model have …
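
For context, a minimal sketch of the gradient-exchange setup these papers study, in the style of FedSGD: each client computes gradients on its private batch and the server averages them. The toy model, batch shapes, and the helper local_gradient below are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

# Toy shared model; in FL every client holds a copy of the same weights.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def local_gradient(model, x, y):
    # One client's contribution: gradients computed on its private batch.
    loss = loss_fn(model(x), y)
    return torch.autograd.grad(loss, model.parameters())

# Clients share only gradients, never the private (x, y) batches themselves.
client_batches = [(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))
                  for _ in range(3)]
client_grads = [local_gradient(model, x, y) for x, y in client_batches]

# The server averages the shared gradients and takes one SGD step.
with torch.no_grad():
    for p, *grads in zip(model.parameters(), *client_grads):
        p -= 0.1 * torch.stack(grads).mean(dim=0)
```

The attacks surveyed in these entries exploit exactly the quantities shared in this loop: the per-client gradients.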

Gradient leakage attacks in federated learning

H Gong, L Jiang, X Liu, Y Wang, O Gastro… - Artificial Intelligence …, 2023 - Springer
Federated Learning (FL) improves the privacy of local training data by exchanging model updates (e.g., local gradients or updated parameters). Gradients and weights of the model …

A survey of image gradient inversion against federated learning

Z Li, L Wang, G Chen, M Shafiq - Authorea Preprints, 2023 - techrxiv.org
In order to preserve data privacy while fully utilizing data from different owners, federated learning has been regarded as a promising approach in recent years. However, aiming at …

Wasserstein Distance-Based Deep Leakage from Gradients

Z Wang, C Peng, X He, W Tan - Entropy, 2023 - mdpi.com
Federated learning protects private information in the dataset by sharing only the average gradient. However, the "Deep Leakage from Gradients" (DLG) algorithm, as a gradient-based …

Fast Generation-Based Gradient Leakage Attacks: An Approach to Generate Training Data Directly From the Gradient

H Yang, D Xue, M Ge, J Li, G Xu, H Li… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Federated learning (FL) is a distributed machine learning technique that guarantees the
privacy of user data. However, FL has been shown to be vulnerable to gradient leakage …

Data Reconstruction Attacks and Defenses: A Systematic Evaluation

S Liu, Z Wang, Q Lei - arXiv preprint arXiv:2402.09478, 2024 - arxiv.org
Reconstruction attacks and defenses are essential in understanding the data leakage
problem in machine learning. However, prior work has centered around empirical …
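
Such evaluations typically score reconstructions against the ground-truth images with similarity metrics such as PSNR; a minimal helper (assuming images scaled to [0, max_val]) might look like the following, which is a generic metric rather than anything specific to this paper.

```python
import torch

def psnr(x_rec, x_true, max_val=1.0):
    # Peak signal-to-noise ratio between a reconstruction and the ground truth;
    # higher values mean a more faithful (i.e., more leaked) reconstruction.
    mse = torch.mean((x_rec - x_true) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```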

An Efficient Federated Convolutional Neural Network Scheme with Differential Privacy

D Zhang, X Chen, J Shi - International Symposium on Emerging …, 2022 - Springer
Federated learning can complete neural network model training without uploading users' private data. However, the deep leakage from gradients (DLG) and the compensatory …
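
A generic sketch of the kind of differential-privacy defense such schemes build on: clip each client update to a fixed norm and add Gaussian noise before sharing. The names privatize_update, clip_norm, and noise_std are illustrative, not the paper's actual mechanism or parameters.

```python
import torch

def privatize_update(grads, clip_norm=1.0, noise_std=0.1):
    # Clip the whole client update to clip_norm (global-norm clipping),
    # then add Gaussian noise to each gradient tensor before uploading.
    flat = torch.cat([g.reshape(-1) for g in grads])
    scale = min(1.0, clip_norm / (flat.norm().item() + 1e-12))
    noisy = []
    for g in grads:
        g = g * scale                              # clipping
        g = g + noise_std * torch.randn_like(g)    # Gaussian noise
        noisy.append(g)
    return noisy
```

In an FL round, a client would pass its locally computed gradients through privatize_update before uploading them, trading some accuracy for resistance to DLG-style reconstruction.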

Batch data recovery from gradients based on generative adversarial networks

Y Huang, Y Chen, JF Martínez-Ortega, H Yu… - Neural Computing and …, 2024 - Springer
In the federated learning scenario, the private data are kept local, and gradients are shared
to train the global model. Because gradients are updated according to the private training …
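
A rough sketch of latent-space gradient inversion as GAN-based attacks of this kind use it: instead of optimizing pixels directly, a latent code is optimized so that the generator's output reproduces the observed gradients, which constrains the search to natural-looking images. The stand-in generator, latent size of 64, and the helper latent_inversion are assumptions for illustration, not the paper's architecture; the labels are assumed known or recovered separately.

```python
import torch
import torch.nn as nn

# A pretrained generator is assumed; this stand-in maps a latent code to an image.
generator = nn.Sequential(nn.Linear(64, 28 * 28), nn.Tanh(),
                          nn.Unflatten(1, (1, 28, 28)))
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def latent_inversion(true_grads, labels, steps=200, lr=0.05):
    # Search the GAN latent space for a batch whose gradients match true_grads.
    z = torch.randn(labels.shape[0], 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fake = generator(z)
        dummy_grads = torch.autograd.grad(loss_fn(model(fake), labels),
                                          model.parameters(), create_graph=True)
        match = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
        match.backward()
        opt.step()
    return generator(z).detach()
```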

Protect privacy from gradient leakage attack in federated learning

J Wang, S Guo, X Xie, H Qi - IEEE INFOCOM 2022-IEEE …, 2022 - ieeexplore.ieee.org
Federated Learning (FL) is susceptible to gradient leakage attacks, as recent studies show
the feasibility of obtaining private training data on clients from publicly shared gradients …