Machine learning models exhibit two seemingly contradictory phenomena: training data memorization and various forms of forgetting. In memorization, models overfit specific …
J Niu, P Liu, X Zhu, K Shen, Y Wang, H Chi… - Journal of Information …, 2024 - Elsevier
Membership inference (MI) attacks aim to infer whether a data record was used to train a target model. Due to the serious privacy risks, MI attacks have been attracting a …
As industrial applications are increasingly automated by machine learning models, enforcing personal data ownership and intellectual property rights requires tracing training …
Y Miao, Y Yu, X Li, Y Guo, X Liu… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
Membership Inference Attack (MIA) is a key measure for evaluating privacy leakage in Machine Learning (ML) models, aiming to distinguish private members from non-members by training …
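To make the member/non-member distinction concrete, a minimal sketch of one of the simplest MIA variants, a loss-threshold attack: predict "member" when the target model's loss on a record is low. The losses here are simulated (an assumption for illustration, not any specific paper's setup) under the common premise that training points incur lower loss than unseen points.

```python
import random

random.seed(0)

# Simulated per-record losses (hypothetical distributions):
# members (training data) tend to have lower loss than non-members.
member_losses = [random.gauss(0.2, 0.1) for _ in range(1000)]
nonmember_losses = [random.gauss(0.8, 0.3) for _ in range(1000)]

def infer_member(loss, threshold=0.5):
    """Loss-threshold attack: flag a record as a training member
    when the model's loss on it falls below the threshold."""
    return loss < threshold

# Attack accuracy over a balanced member/non-member pool.
correct = sum(infer_member(l) for l in member_losses) \
        + sum(not infer_member(l) for l in nonmember_losses)
accuracy = correct / (len(member_losses) + len(nonmember_losses))
```

In practice the threshold (or a learned attack classifier) is typically calibrated on shadow models trained on data drawn from a similar distribution; the fixed threshold above is a stand-in for that calibration step.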
M Rafiei, M Maheri, HR Rabiee - arXiv preprint arXiv:2406.00249, 2024 - arxiv.org
Meta-learning involves multiple learners, each dedicated to specific tasks, collaborating in a data-constrained setting. In current meta-learning methods, task learners locally learn …
Machine learning models have been shown to leak sensitive information about their training datasets. Models are increasingly deployed on devices, raising concerns that white-box …