I know what you trained last summer: A survey on stealing machine learning models and defences

D Oliynyk, R Mayer, A Rauber - ACM Computing Surveys, 2023 - dl.acm.org
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available to clients via, e.g., a pay-per …

The false promise of imitating proprietary LLMs

A Gudibande, E Wallace, C Snell, X Geng, H Liu… - arXiv preprint arXiv …, 2023 - arxiv.org
An emerging method to cheaply improve a weaker language model is to finetune it on
outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self …

Trustworthy LLMs: A survey and guideline for evaluating large language models' alignment

Y Liu, Y Yao, JF Ton, X Zhang, RGH Cheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …

A survey on privacy for B5G/6G: New privacy challenges, and research directions

C Sandeepa, B Siniarski, N Kourtellis, S Wang… - Journal of Industrial …, 2022 - Elsevier
Massive developments in mobile wireless telecommunication networks have been made
during the last few decades. At present, mobile users are getting familiar with the latest 5G …

Copy, right? A testing framework for copyright protection of deep learning models

J Chen, J Wang, T Peng, Y Sun… - … IEEE symposium on …, 2022 - ieeexplore.ieee.org
Deep learning models, especially large-scale and high-performance ones, can be
very costly to train, demanding a considerable amount of data and computational resources …

Proof-of-learning: Definitions and practice

H Jia, M Yaghini, CA Choquette-Choo… - … IEEE Symposium on …, 2021 - ieeexplore.ieee.org
Training machine learning (ML) models typically involves expensive iterative optimization.
Once the model's final parameters are released, there is currently no mechanism for the …

Defending against model stealing via verifying embedded external features

Y Li, L Zhu, X Jia, Y Jiang, ST Xia, X Cao - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Obtaining a well-trained model involves expensive data collection and training procedures;
the model is therefore valuable intellectual property. Recent studies revealed that …

PromptCARE: Prompt copyright protection by watermark injection and verification

H Yao, J Lou, Z Qin, K Ren - 2024 IEEE Symposium on Security …, 2024 - ieeexplore.ieee.org
Large language models (LLMs) have witnessed a meteoric rise in popularity among the
general public over the past few months, facilitating diverse downstream tasks with …

FedIPR: Ownership verification for federated deep neural network models

B Li, L Fan, H Gu, J Li, Q Yang - IEEE Transactions on Pattern …, 2022 - ieeexplore.ieee.org
Federated learning models are collaboratively developed upon valuable training data
owned by multiple parties. During the development and deployment of federated models …

Dataset inference for self-supervised models

A Dziedzic, H Duan, MA Kaleem… - Advances in …, 2022 - proceedings.neurips.cc
Self-supervised models are increasingly prevalent in machine learning (ML) since they
reduce the need for expensively labeled data. Because of their versatility in downstream …