Overcoming failures of imagination in AI infused system development and deployment

M Boyarskaya, A Olteanu, K Crawford - arXiv preprint arXiv:2011.13416, 2020 - arxiv.org
NeurIPS 2020 requested that research paper submissions include impact statements on
"potential nefarious uses and the consequences of failure." However, as researchers …

Aha!: Facilitating AI impact assessment by generating examples of harms

Z Buçinca, CM Pham, M Jakesch, MT Ribeiro… - arXiv preprint arXiv …, 2023 - arxiv.org
While demands for change and accountability for harmful AI consequences mount,
foreseeing the downstream effects of deploying AI systems remains a challenging task. We …

Disambiguating algorithmic bias: from neutrality to justice

E Edenberg, A Wood - Proceedings of the 2023 AAAI/ACM Conference …, 2023 - dl.acm.org
As algorithms have become ubiquitous in consequential domains, societal concerns about
the potential for discriminatory outcomes have prompted urgent calls to address algorithmic …

Unpacking the expressed consequences of AI research in broader impact statements

P Nanayakkara, J Hullman, N Diakopoulos - Proceedings of the 2021 …, 2021 - dl.acm.org
The computer science research community and the broader public have become
increasingly aware of negative consequences of algorithmic systems. In response, the top …

Concrete problems in AI safety, revisited

ID Raji, R Dobbe - arXiv preprint arXiv:2401.10899, 2023 - arxiv.org
As AI systems proliferate in society, the AI community is increasingly preoccupied with the
concept of AI Safety, namely the prevention of failures due to accidents that arise from an …

Gaps in the Safety Evaluation of Generative AI

M Rauh, N Marchal, A Manzini, LA Hendricks… - Proceedings of the …, 2024 - ojs.aaai.org
Generative AI systems produce a range of ethical and social risks. Evaluation of these risks
is a critical step on the path to ensuring the safety of these systems. However, evaluation …

Sociotechnical safety evaluation of generative AI systems

L Weidinger, M Rauh, N Marchal, A Manzini… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative AI systems produce a range of risks. To ensure the safety of generative AI
systems, these risks must be evaluated. In this paper, we make two main contributions …

The fallacy of AI functionality

ID Raji, IE Kumar, A Horowitz, A Selbst - … of the 2022 ACM Conference on …, 2022 - dl.acm.org
Deployed AI systems often do not work. They can be constructed haphazardly, deployed
indiscriminately, and promoted deceptively. However, despite this reality, scholars, the …

Hard choices in artificial intelligence

R Dobbe, TK Gilbert, Y Mintz - Artificial Intelligence, 2021 - Elsevier
As AI systems are integrated into high stakes social domains, researchers now examine how
to design and operate them in a safe and ethical manner. However, the criteria for identifying …

Interrogating algorithmic bias: from speculative fiction to liberatory design

N Gaskins - TechTrends, 2023 - Springer
This paper reviews algorithmic or artificial intelligence (AI) bias in education technology,
especially through the lenses of speculative fiction, speculative and liberatory design. It …