Our future in the Anthropocene biosphere

C Folke, S Polasky, J Rockström, V Galaz, F Westley… - Ambio, 2021 - Springer
The COVID-19 pandemic has exposed an interconnected and tightly coupled globalized
world in rapid change. This article sets the scientific stage for understanding and responding …

Typology of risks of generative text-to-image models

C Bird, E Ungless, A Kasirzadeh - Proceedings of the 2023 AAAI/ACM …, 2023 - dl.acm.org
This paper investigates the direct risks and harms associated with modern text-to-image
generative models, such as DALL-E and Midjourney, through a comprehensive literature …

Towards understanding and mitigating social biases in language models

PP Liang, C Wu, LP Morency… - … on Machine Learning, 2021 - proceedings.mlr.press
As machine learning methods are deployed in real-world settings such as healthcare, legal
systems, and social science, it is crucial to recognize how they shape social biases and …

Artificial intelligence, systemic risks, and sustainability

V Galaz, MA Centeno, PW Callahan, A Causevic… - Technology in …, 2021 - Elsevier
Automated decision making and predictive analytics through artificial intelligence, in
combination with rapid progress in technologies such as sensor technology and robotics, are …

Documenting large webtext corpora: A case study on the colossal clean crawled corpus

J Dodge, M Sap, A Marasović, W Agnew… - arXiv preprint arXiv …, 2021 - arxiv.org
Large language models have led to remarkable progress on many NLP tasks, and
researchers are turning to ever-larger text corpora to train them. Some of the largest corpora …

RealToxicityPrompts: Evaluating neural toxic degeneration in language models

S Gehman, S Gururangan, M Sap, Y Choi… - arXiv preprint arXiv …, 2020 - arxiv.org
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise
toxic language which hinders their safe deployment. We investigate the extent to which …

Language (technology) is power: A critical survey of "bias" in NLP

SL Blodgett, S Barocas, H Daumé III… - arXiv preprint arXiv …, 2020 - arxiv.org
We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are
often vague, inconsistent, and lacking in normative reasoning, despite the fact that …

Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction

R Shelby, S Rismani, K Henne, AJ Moon… - Proceedings of the …, 2023 - dl.acm.org
Understanding the landscape of potential harms from algorithmic systems enables
practitioners to better anticipate consequences of the systems they build. It also supports the …

Algorithmic content moderation: Technical and political challenges in the automation of platform governance

R Gorwa, R Binns, C Katzenbach - Big Data & Society, 2020 - journals.sagepub.com
As government pressure on major technology companies builds, both firms and legislators
are searching for technical solutions to difficult platform governance puzzles such as hate …

Challenges in detoxifying language models

J Welbl, A Glaese, J Uesato, S Dathathri… - arXiv preprint arXiv …, 2021 - arxiv.org
Large language models (LMs) generate remarkably fluent text and can be efficiently adapted
across NLP tasks. Measuring and guaranteeing the quality of generated text in terms of …