Power to the people? Opportunities and challenges for participatory AI

A Birhane, W Isaac, V Prabhakaran, M Diaz… - Proceedings of the 2nd …, 2022 - dl.acm.org
Participatory approaches to artificial intelligence (AI) and machine learning (ML) are gaining
momentum: the increased attention comes partly with the view that participation opens the …

Can HR adapt to the paradoxes of artificial intelligence?

A Charlwood, N Guenole - Human Resource Management …, 2022 - Wiley Online Library
Artificial intelligence (AI) is widely heralded as a new and revolutionary technology that will
transform the world of work. While the impact of AI on human resource (HR) and people …

Holistic evaluation of language models

P Liang, R Bommasani, T Lee, D Tsipras… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …

Taxonomy of risks posed by language models

L Weidinger, J Uesato, M Rauh, C Griffin… - Proceedings of the …, 2022 - dl.acm.org
Responsible innovation on large-scale Language Models (LMs) requires foresight into and
in-depth understanding of the risks these models may pose. This paper develops a …

Auditing large language models: a three-layered approach

J Mökander, J Schuett, HR Kirk, L Floridi - AI and Ethics, 2024 - Springer
Large language models (LLMs) represent a major advance in artificial intelligence (AI)
research. However, the widespread use of LLMs is also coupled with significant ethical and …

Improving language models by retrieving from trillions of tokens

S Borgeaud, A Mensch, J Hoffmann… - International …, 2022 - proceedings.mlr.press
We enhance auto-regressive language models by conditioning on document chunks
retrieved from a large corpus, based on local similarity with preceding tokens. With a 2 …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

On the dangers of stochastic parrots: Can language models be too big?🦜

EM Bender, T Gebru, A McMillan-Major… - Proceedings of the 2021 …, 2021 - dl.acm.org
The past 3 years of work in NLP have been characterized by the development and
deployment of ever larger language models, especially for English. BERT, its variants, GPT …

The Pile: An 800GB dataset of diverse text for language modeling

L Gao, S Biderman, S Black, L Golding… - arXiv preprint arXiv …, 2020 - arxiv.org
Recent work has demonstrated that increased training dataset diversity improves general
cross-domain knowledge and downstream generalization capability for large-scale …

RealToxicityPrompts: Evaluating neural toxic degeneration in language models

S Gehman, S Gururangan, M Sap, Y Choi… - arXiv preprint arXiv …, 2020 - arxiv.org
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise
toxic language which hinders their safe deployment. We investigate the extent to which …