Y Li, Z Tan, Y Liu - arXiv preprint arXiv:2305.06212, 2023 - arxiv.org
Prompt tuning provides an efficient way for users to customize Large Language Models (LLMs) with their private data in the emerging LLM service scenario. However, the sensitive …
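The entry above refers to prompt tuning as a lightweight way to customize a served LLM on private data. As a rough illustration only (not this paper's own method), the sketch below shows the usual soft-prompt setup: the served model is frozen and only a small matrix of prompt embeddings is trained on the user's data. The `SoftPromptWrapper` class and its parameters are hypothetical, and the base model is assumed to accept `inputs_embeds` in the Hugging Face style.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Minimal soft-prompt tuning sketch: a frozen LM plus a small set of
    trainable prompt embeddings prepended to every input. Only the prompt
    parameters are updated, so the user's data never touches the base
    model's weights."""

    def __init__(self, base_model, embed_dim, num_prompt_tokens=20):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False  # freeze the served LLM
        # The only trainable parameters: one vector per virtual prompt token.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) token embeddings
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned prompt, then run the frozen model on the result.
        return self.base_model(inputs_embeds=torch.cat([prompt, input_embeds], dim=1))
```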
Large language models pretrained on huge amounts of data capture rich knowledge and information from their training data. The ability of data memorization and regurgitation in …
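Memorization of this kind is typically probed by feeding the model a prefix that appeared in training data and checking whether it reproduces the true suffix. A minimal sketch follows, assuming a Hugging Face causal LM; `gpt2` is just a stand-in for the model under audit, and the prefix/suffix strings are fabricated placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical probe: does the model regurgitate a training sequence
# when given only its prefix?
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prefix = "Alice's phone number is"   # prefix suspected to be in training data
true_suffix = " 555-0142"            # ground-truth continuation (placeholder)

inputs = tok(prefix, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)  # greedy decoding
continuation = tok.decode(out[0][inputs["input_ids"].shape[1]:])

# Exact reproduction of the suffix signals memorization of this sequence.
print("memorized" if continuation.startswith(true_suffix) else "not memorized")
```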
Foundation Models (FMs) such as GPT-4, encoded with vast knowledge and powerful emergent abilities, have achieved remarkable success in various natural language …
S Shahriar, R Dara, R Akalu - Computers & Security, 2025 - Elsevier
The emergence of smartphones and global internet accessibility has enabled billions of people to connect to the digital world. Due to the popularity of instant …
Large-scale adoption of large language models has introduced a new era of convenient knowledge transfer for a slew of natural language processing tasks. However, these models …
K Edemacu, X Wu - arXiv preprint arXiv:2404.06001, 2024 - arxiv.org
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a wide range of general natural language processing (NLP) tasks. Researchers have …
Recently, more and more pre-trained language models have been released as cloud services, allowing users who lack computing resources to perform inference with a powerful model by …
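In such a service, the client ships its text to the provider for inference. The sketch below is purely illustrative: the endpoint URL, payload schema, and auth header are hypothetical placeholders, not any real provider's API. It makes the privacy exposure concrete: the prompt leaves the user's device in plaintext.

```python
import json
import urllib.request

# Hypothetical LLM inference service; all names below are placeholders.
ENDPOINT = "https://api.example.com/v1/completions"

def remote_inference(prompt: str, api_key: str) -> str:
    """Send a prompt to a cloud-hosted model and return its completion."""
    payload = json.dumps({"model": "served-plm", "prompt": prompt}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

# Note: the prompt is transmitted in plaintext to the provider, which is
# exactly the exposure the surveyed privacy-preserving methods target.
```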
Z Kan, L Qiao, H Yu, L Peng, Y Gao, D Li - arXiv preprint arXiv:2306.08223, 2023 - arxiv.org
Large Language Models (LLMs) are gaining increasing attention due to their exceptional performance across numerous tasks. As a result, the general public utilizes them as an …
Encoded text representations often capture sensitive attributes about individuals (e.g., race or gender), which raises privacy concerns and can make downstream models unfair to certain …
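Leakage of this kind is commonly measured with a probing classifier: if a simple model can predict the sensitive attribute from the embeddings well above chance, the representations encode it. A minimal sketch, using random placeholder data in place of real encoder outputs and labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: stand-ins for encoder outputs and sensitive labels.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))   # e.g., sentence encoder vectors
attribute = rng.integers(0, 2, size=1000)   # e.g., a binary sensitive attribute

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, attribute, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy far above the majority-class baseline indicates the embeddings
# leak the sensitive attribute to any downstream consumer.
print("probe accuracy:", probe.score(X_te, y_te))
```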