Domain specialization as the key to make large language models disruptive: A comprehensive survey

C Ling, X Zhao, J Lu, C Deng, C Zheng, J Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have significantly advanced the field of natural language
processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of …

Parameter-efficient fine-tuning of large-scale pre-trained language models

N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su… - Nature Machine …, 2023 - nature.com
With the prevalence of pre-trained language models (PLMs) and the pre-training–fine-tuning
paradigm, it has been continuously shown that larger models tend to yield better …

ROBBIE: Robust bias evaluation of large generative language models

D Esiobu, X Tan, S Hosseini, M Ung… - Proceedings of the …, 2023 - aclanthology.org
As generative large language models (LLMs) grow more performant and prevalent, we must
develop comprehensive enough tools to measure and improve their fairness. Different …

TrustGPT: A benchmark for trustworthy and responsible large language models

Y Huang, Q Zhang, L Sun - arXiv preprint arXiv:2306.11507, 2023 - arxiv.org
Large Language Models (LLMs), such as ChatGPT, have gained significant attention due to
their impressive natural language processing capabilities. It is crucial to prioritize human …

Survey of social bias in vision-language models

N Lee, Y Bang, H Lovenia, S Cahyawijaya… - arXiv preprint arXiv …, 2023 - arxiv.org
In recent years, the rapid advancement of machine learning (ML) models, particularly
transformer-based pre-trained models, has revolutionized Natural Language Processing …

On the challenges of using black-box APIs for toxicity evaluation in research

L Pozzobon, B Ermis, P Lewis, S Hooker - arXiv preprint arXiv:2304.12397, 2023 - arxiv.org
Perception of toxicity evolves over time and often differs between geographies and cultural
backgrounds. Similarly, black-box commercially available APIs for detecting toxicity, such as …

Goodtriever: Adaptive toxicity mitigation with retrieval-augmented models

L Pozzobon, B Ermis, P Lewis, S Hooker - arXiv preprint arXiv:2310.07589, 2023 - arxiv.org
Considerable effort has been dedicated to mitigating toxicity, but existing methods often
require drastic modifications to model parameters or the use of computationally intensive …

Bias and fairness in large language models: A survey

IO Gallegos, RA Rossi, J Barrow, MM Tanjim… - Computational …, 2024 - direct.mit.edu
Rapid advancements of large language models (LLMs) have enabled the processing,
understanding, and generation of human-like text, with increasing integration into systems …

MAUVE scores for generative models: Theory and practice

K Pillutla, L Liu, J Thickstun, S Welleck… - Journal of Machine …, 2023 - jmlr.org
Generative artificial intelligence has made significant strides, producing text
indistinguishable from human prose and remarkably photorealistic images. Automatically …

ToViLaG: Your visual-language generative model is also an evildoer

X Wang, X Yi, H Jiang, S Zhou, Z Wei, X Xie - arXiv preprint arXiv …, 2023 - arxiv.org
Warning: this paper includes model outputs showing offensive content. Recent large-scale
Visual-Language Generative Models (VLGMs) have achieved unprecedented improvement …