J Niu, P Liu, X Zhu, K Shen, Y Wang, H Chi… - Journal of Information …, 2024 - Elsevier
Membership inference (MI) attacks aim to infer whether a given data record was used to train a target model. Due to the serious privacy risks they pose, MI attacks have been attracting a …
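The core idea behind the simplest MI attacks can be sketched in a few lines: a model often reports higher confidence on records it was trained on, so an attacker thresholds that confidence. The toy "model" below is a hypothetical stand-in (not any paper's actual setup); all names and values are invented for illustration.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# The "target model" is a toy stand-in: it returns higher confidence on
# records it memorized during "training" (hypothetical setup).

def target_confidence(record, train_set):
    # Toy behavior: memorized training records yield high confidence,
    # unseen records yield lower confidence.
    return 0.95 if record in train_set else 0.60

def mia_predict(record, train_set, threshold=0.8):
    # Attacker's rule: flag as "member" when the model's confidence
    # on the record exceeds a calibrated threshold.
    return target_confidence(record, train_set) >= threshold

train = {"rec_a", "rec_b"}
print(mia_predict("rec_a", train))  # member -> True
print(mia_predict("rec_z", train))  # non-member -> False
```

Real attacks replace the hard-coded confidences with the target model's actual output distribution and calibrate the threshold using shadow models.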
H Zhu, H Zheng, Y Zhu, X Sui - Information Sciences, 2023 - Elsevier
Deep neural networks are highly susceptible to noise that is imperceptible even to the human eye. While high attack success rates have been achieved in the white-box setting, the attack …
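A standard white-box attack of the kind this line alludes to is the fast gradient sign method (FGSM): perturb each input feature by a small step in the direction of the loss gradient's sign. The sketch below applies it to a toy logistic model with invented weights and inputs; it is an illustration of the technique, not this paper's method.

```python
import math

# FGSM sketch on a toy logistic model (all weights/inputs invented).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Model confidence that x belongs to class 1.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps=0.3):
    # For a logistic model, the gradient of the cross-entropy loss
    # w.r.t. the input is (p - y) * w; FGSM moves each feature by
    # eps in the gradient's sign direction to increase the loss.
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.5], 0.1
x, y = [1.0, 0.2], 1          # input correctly classified as class 1
x_adv = fgsm(w, b, x, y)
print(predict(w, b, x), predict(w, b, x_adv))  # confidence drops on x_adv
```

The same one-step rule scales to deep networks, with autograd supplying the input gradient.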
L Wu, Z Liu, B Pu, K Wei, H Cao, S Yao - Information Fusion, 2024 - Elsevier
Federated learning is a privacy-preserving distributed framework that facilitates information fusion and sharing among different clients, enabling the training of a global model without …
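The global-model training this entry describes is typically done with FedAvg-style aggregation: each client trains locally, and the server averages the resulting weights, weighted by local dataset size. A minimal sketch, with client weights and sizes invented for illustration:

```python
# FedAvg-style server aggregation sketch: average client model weights,
# weighted by each client's local dataset size. Values are invented.

def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

clients = [[1.0, 2.0], [3.0, 4.0]]   # local model weights per client
sizes = [10, 30]                     # local dataset sizes
print(fed_avg(clients, sizes))       # [2.5, 3.5]
```

Only weights cross the network; raw client data never leaves the client, which is the privacy property the entry highlights.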
C Wu, J Chen, Q Fang, K He, Z Zhao… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Transfer learning, successful at transferring knowledge across related tasks, faces a substantial privacy threat from membership inference attacks (MIAs). These attacks, despite …
Z Ding, Y Tian, G Wang, J Xiong - 2024 2nd International …, 2024 - ieeexplore.ieee.org
Neural network models face two highly destructive threats in real-world applications: membership inference attacks (MIAs) and adversarial attacks (AAs). One compromises the …
G Cui, L Ge, Y Zhao, T Fang - International Conference on Applied …, 2023 - Springer
The development of deep learning has given rise to the Machine Learning as a Service (MLaaS) business model. Malicious users can infer whether a member has …