On the trustworthiness landscape of state-of-the-art generative models: A comprehensive survey

M Fan, C Chen, C Wang, J Huang - arXiv preprint arXiv:2307.16680, 2023 - arxiv.org
Diffusion models and large language models have emerged as leading-edge generative
models and have sparked a revolutionary impact on various aspects of human life. However …

An overview on generative AI at scale with Edge-Cloud Computing

YC Wang, J Xue, C Wei… - IEEE Open Journal of the …, 2023 - ieeexplore.ieee.org
As a specific category of artificial intelligence (AI), generative artificial intelligence (GenAI)
generates new content that resembles what humans create. The rapid development of …

Large language model alignment: A survey

T Shen, R Jin, Y Huang, C Liu, W Dong, Z Guo… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent years have witnessed remarkable progress made in large language models (LLMs).
Such advancements, while garnering significant attention, have concurrently elicited various …

Recovering private text in federated learning of language models

S Gupta, Y Huang, Z Zhong, T Gao… - Advances in neural …, 2022 - proceedings.neurips.cc
Federated learning allows distributed users to collaboratively train a model while keeping
each user's data private. Recently, a growing body of work has demonstrated that an …

FedLegal: The first real-world federated learning benchmark for legal NLP

Z Zhang, X Hu, J Zhang, Y Zhang… - Proceedings of the …, 2023 - aclanthology.org
The inevitable private information in legal data necessitates legal artificial intelligence to
study privacy-preserving and decentralized learning methods. Federated learning (FL) has …

Privacy implications of retrieval-based language models

Y Huang, S Gupta, Z Zhong, K Li, D Chen - arXiv preprint arXiv …, 2023 - arxiv.org
Retrieval-based language models (LMs) have demonstrated improved interpretability,
factuality, and adaptability compared to their parametric counterparts, by incorporating …

LAMP: Extracting text from gradients with language model priors

M Balunovic, D Dimitrov… - Advances in Neural …, 2022 - proceedings.neurips.cc
Recent work shows that sensitive user data can be reconstructed from gradient updates,
breaking the key privacy promise of federated learning. While success was demonstrated …

Security and privacy challenges of large language models: A survey

BC Das, MH Amini, Y Wu - arXiv preprint arXiv:2402.00888, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text, language …

Backdoor activation attack: Attack large language models using activation steering for safety-alignment

H Wang, K Shu - arXiv preprint arXiv:2311.09433, 2023 - arxiv.org
To ensure AI safety, instruction-tuned Large Language Models (LLMs) are specifically
trained to ensure alignment, which refers to making models behave in accordance with …

A survey of what to share in federated learning: Perspectives on model utility, privacy leakage, and communication efficiency

J Shao, Z Li, W Sun, T Zhou, Y Sun, L Liu, Z Lin… - arXiv preprint arXiv …, 2023 - arxiv.org
Federated learning (FL) has emerged as a secure paradigm for collaborative training among
clients. Without data centralization, FL allows clients to share local information in a privacy …