Gradient leakage attack resilient deep learning

W Wei, L Liu - IEEE Transactions on Information Forensics and …, 2021 - ieeexplore.ieee.org
Gradient leakage attacks are considered among the most severe privacy threats in deep
learning, as attackers covertly spy on gradient updates during iterative training without …
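
To make the threat concrete, here is a minimal sketch of a DLG-style gradient inversion (after Zhu et al., "Deep Leakage from Gradients"): the attacker optimizes a dummy sample so that its gradients match the gradients observed from the victim. The toy linear model, dimensions, and optimizer settings are illustrative assumptions, not details from this paper.

    # DLG-style gradient inversion sketch; toy model and sizes are assumptions.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    model = torch.nn.Linear(16, 4)                      # toy victim model

    # Victim side: the gradient the attacker observes for one private sample.
    x_true, y_true = torch.randn(1, 16), torch.tensor([2])
    true_grads = torch.autograd.grad(F.cross_entropy(model(x_true), y_true),
                                     model.parameters())

    # Attacker side: optimize dummy data/label so their gradients match the observed ones.
    x_dummy = torch.randn(1, 16, requires_grad=True)
    y_dummy = torch.randn(1, 4, requires_grad=True)     # soft label, optimized jointly
    opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

    for _ in range(300):
        opt.zero_grad()
        log_probs = F.log_softmax(model(x_dummy), dim=-1)
        dummy_loss = -(F.softmax(y_dummy, dim=-1) * log_probs).sum()
        dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                          create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        opt.step()

    print("reconstruction error:", (x_dummy - x_true).norm().item())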

Membership inference attacks via spatial projection-based relative information loss in MLaaS

Z Ding, Y Tian, G Wang, J Xiong, J Tang… - Information Processing & …, 2025 - Elsevier
Machine Learning as a Service (MLaaS) has significantly advanced data-driven
decision-making and the development of intelligent applications. However, the privacy risks …

Rethinking Membership Inference Attacks Against Transfer Learning

C Wu, J Chen, Q Fang, K He, Z Zhao… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Transfer learning, which successfully transfers knowledge across related tasks, faces a
substantial privacy threat from membership inference attacks (MIAs). These attacks, despite …

SAMFL: Secure Aggregation Mechanism for Federated Learning with Byzantine-robustness by functional encryption

M Guan, H Bao, Z Li, H Pan, C Huang… - Journal of Systems …, 2024 - Elsevier
Federated learning (FL) enables collaborative model training without sharing private data,
thereby potentially meeting the growing demand for data privacy protection. Despite its …
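
For intuition about what secure aggregation guarantees, the sketch below shows the classic pairwise additive-masking approach (in the spirit of Bonawitz et al.), not SAMFL's functional-encryption construction: each client blinds its update with masks that cancel in the server's sum, so only the aggregate is revealed. The shared seeds stand in for a real key agreement between clients.

    # Pairwise additive-mask secure aggregation sketch; not SAMFL's scheme.
    import numpy as np

    n_clients, dim = 4, 8
    rng = np.random.default_rng(0)
    updates = rng.normal(size=(n_clients, dim))        # each client's private update

    # One shared seed per client pair (in practice derived via key exchange).
    seeds = {(i, j): rng.integers(1 << 30)
             for i in range(n_clients) for j in range(i + 1, n_clients)}

    def masked_update(i):
        masked = updates[i].copy()
        for j in range(n_clients):
            if j == i:
                continue
            pair = (min(i, j), max(i, j))
            mask = np.random.default_rng(seeds[pair]).normal(size=dim)
            masked += mask if i < j else -mask         # masks cancel pairwise in the sum
        return masked

    # The server sees only masked updates, yet their sum equals the true aggregate.
    server_sum = sum(masked_update(i) for i in range(n_clients))
    assert np.allclose(server_sum, updates.sum(axis=0))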

A realistic model extraction attack against graph neural networks

F Guan, T Zhu, H Tong, W Zhou - Knowledge-Based Systems, 2024 - Elsevier
Model extraction attacks are considered a significant vulnerability in machine
learning. In model extraction attacks, the attacker repeatedly queries a victim model …
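
As a toy illustration of query-based extraction (not this paper's GNN-specific attack), the sketch below fits a surrogate to a black-box victim's query responses; the linear models, query distribution, and query budget are all assumptions.

    # Toy query-based model extraction: train a surrogate to imitate the victim.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    victim = torch.nn.Linear(10, 3)                    # black box from the attacker's view
    surrogate = torch.nn.Linear(10, 3)
    opt = torch.optim.Adam(surrogate.parameters(), lr=0.05)

    for _ in range(500):
        queries = torch.randn(64, 10)                  # attacker-chosen inputs
        with torch.no_grad():
            victim_probs = F.softmax(victim(queries), dim=-1)   # observed API output
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(surrogate(queries), dim=-1),
                        victim_probs, reduction="batchmean")
        loss.backward()
        opt.step()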

Membership Feature Aggregation Attack Against Knowledge Reasoning Models in Internet of Things

Z Ding, Y Tian, J Xiong, G Wang… - IEEE Internet of Things …, 2024 - ieeexplore.ieee.org
The rapid growth of IoT technology has heightened the need for effective data
management and analysis. Knowledge graphs (KGs) and large pre-trained language …

[PDF] K-Aster: A novel Membership Inference Attack via Prediction Sensitivity

R Li, X Zhao, D Li, Y Tan - poster-openaccess.com
Membership inference attacks (MIAs) are considered a fundamental privacy risk in machine
learning (ML); they attempt to determine whether a specific data sample is part of the training data for …
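
A hedged sketch of the general prediction-sensitivity idea follows (K-Aster's exact statistic may differ): perturb an input slightly, measure how much the model's output moves, and treat low sensitivity as evidence of membership. The model, noise scale, and threshold below are hypothetical.

    # Generic prediction-sensitivity membership score; parameters are hypothetical.
    import torch
    import torch.nn.functional as F

    def sensitivity_score(model, x, eps=1e-3, n_probes=8):
        # Average output shift under small random input perturbations; training
        # members are expected to sit in flatter regions, giving lower scores.
        with torch.no_grad():
            base = F.softmax(model(x), dim=-1)
            diffs = [(F.softmax(model(x + eps * torch.randn_like(x)), dim=-1) - base)
                     .norm().item() for _ in range(n_probes)]
        return sum(diffs) / n_probes

    model = torch.nn.Linear(16, 4)                     # hypothetical target model
    x = torch.randn(1, 16)
    is_member = sensitivity_score(model, x) < 0.05     # illustrative threshold
    print(is_member)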