Protecting regression models with personalized local differential privacy

X Li, H Yan, Z Cheng, W Sun… - IEEE Transactions on Dependable and Secure Computing, 2022 - ieeexplore.ieee.org
The equation-solving model extraction attack is an intuitively simple yet devastating attack that steals confidential information from regression models through a sufficient number of queries. Complete mitigation is difficult, so the development of countermeasures focuses on degrading the attack's effectiveness as much as possible without sacrificing model utility. We investigate a novel personalized local differential privacy mechanism to defend against this attack. We obfuscate the model by adding high-dimensional Gaussian noise to the model coefficients, and our solution generates the noise adaptively to protect the model on the fly. We thoroughly evaluate the performance of our mechanism using real-world datasets. The experiments show that the proposed scheme outperforms the existing differential-privacy-enabled solution: an attacker needs four times as many queries to achieve the same attack result. We also plan to release the relevant code to the community for further research.
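As a rough illustration of both the attack and the defense the abstract describes, the sketch below extracts a linear regression model by solving a linear system built from exact query responses, then shows how per-query Gaussian noise on the coefficients degrades the recovery. The linear victim model, the query interface, and the noise scale `sigma` are illustrative assumptions; the paper's personalized, adaptive noise mechanism is not reproduced here.

```python
# Minimal sketch, assuming a linear regression victim model; names and
# the fixed noise scale are hypothetical, not the authors' released code.
import numpy as np

rng = np.random.default_rng(0)

# Victim model: y = w.x + b with secret coefficients.
d = 5
w_secret = rng.normal(size=d)
b_secret = rng.normal()

def predict(x):
    """Black-box query interface exposed by the victim model."""
    return x @ w_secret + b_secret

# Equation-solving extraction: d + 1 exact queries determine the d + 1
# unknowns (w, b) by solving the linear system [X | 1] * theta = y.
X = rng.normal(size=(d + 1, d))
y = np.array([predict(x) for x in X])
A = np.hstack([X, np.ones((d + 1, 1))])
theta = np.linalg.solve(A, y)
print("extraction error (no defense):", np.linalg.norm(theta[:d] - w_secret))

# Defense sketch: answer each query with coefficients perturbed by
# high-dimensional Gaussian noise, drawn fresh per query ("on the fly").
sigma = 0.1  # hypothetical fixed scale; the paper produces noise adaptively

def predict_private(x):
    w_noisy = w_secret + rng.normal(scale=sigma, size=d)
    b_noisy = b_secret + rng.normal(scale=sigma)
    return x @ w_noisy + b_noisy

y_noisy = np.array([predict_private(x) for x in X])
theta_noisy = np.linalg.solve(A, y_noisy)
print("extraction error (with noise):", np.linalg.norm(theta_noisy[:d] - w_secret))
```

With sigma = 0 the second solve recovers the coefficients exactly, which is the core vulnerability; with per-query noise the attacker must average over many more responses, which is why defense strength is naturally measured in the number of queries needed to reach the same attack result.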