On protecting the data privacy of large language models (LLMs): A survey

B Yan, K Li, M Xu, Y Dong, Y Zhang, Z Ren… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating and translating human language. They learn language patterns …

pvCNN: Privacy-preserving and verifiable convolutional neural network testing

J Weng, J Weng, G Tang, A Yang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
We propose a new approach for privacy-preserving and verifiable convolutional neural
network (CNN) testing in a distrustful multi-stakeholder environment. The approach is aimed …

Don't Eject the Impostor: Fast Three-Party Computation With a Known Cheater

A Brüggemann, O Schick, T Schneider… - 2024 IEEE Symposium …, 2024 - computer.org
Secure multi-party computation (MPC) enables collaboration on sensitive data while
maintaining privacy. In real-world scenarios, asymmetric trust assumptions are often most …

Don't Eject the Impostor: Fast Three-Party Computation With a Known Cheater (Full Version)

A Brüggemann, O Schick, T Schneider… - Cryptology ePrint …, 2023 - eprint.iacr.org
Secure multi-party computation (MPC) enables (joint) computations on sensitive data while
maintaining privacy. In real-world scenarios, asymmetric trust assumptions are often most …

From Individual Computation to Allied Optimization: Remodeling Privacy-Preserving Neural Inference with Function Input Tuning

Q Zhang, T Xiang, C Xin, H Wu - 2024 IEEE Symposium on Security …, 2024 - computer.org
Abstract Privacy-preserving Machine Learning as a Service (MLaaS) enables the resource-
limited client to cost-efficiently obtain inference output of a well-trained neural model that is …

FeaShare: Feature Sharing for Computation Correctness in Edge Preprocessing

Z Zhao, H Bin, H Li, N Yu, H Zhu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Edge preprocessing is a critical service type in edge computing. However, untrusted edges
may be malicious and provide incorrect computational results (i.e., edge tampering). Although …

VerifyML: Obliviously Checking Model Fairness Resilient to Malicious Model Holder

G Xu, X Han, G Deng, T Zhang, S Xu… - … on Dependable and …, 2023 - ieeexplore.ieee.org
In this paper, we present VerifyML, the first secure inference framework to check the fairness
degree of a given machine learning (ML) model. VerifyML is generic and is immune to any …

Privacy-Preserving and Verifiable Outsourcing Inference Against Malicious Servers

Y Liu, H Li, M Hao, X Zhang… - GLOBECOM 2023-2023 …, 2023 - ieeexplore.ieee.org
Outsourcing inference enables users to outsource neural network inference tasks to a
service provider (e.g., a remote server). This paradigm has brought enormous convenience …

A Generative Framework for Low-Cost Result Validation of Outsourced Machine Learning Tasks

A Kumar, MAG Aguilera, R Tourani, S Misra - arXiv preprint arXiv …, 2023 - arxiv.org
The growing popularity of Machine Learning (ML) has led to its deployment in various
sensitive domains, which has resulted in significant research focused on ML security and …

Fides: A Generative Framework for Result Validation of Outsourced Machine Learning Workloads via TEE

A Kumar, A Miguel - arXiv, 2023 - par.nsf.gov
The growing popularity of Machine Learning (ML) has led to its deployment in various
sensitive domains, which has resulted in significant research focused on ML security and …