Authors
Lea Goetz, Markus Trengove, Artem Trotsyuk, Carole A Federico
Publication date
2023/10/3
Journal
The American Journal of Bioethics
Volume
23
Issue
10
Pages
89-91
Publisher
Taylor & Francis
Description
Whilst Rahimzadeh et al. (2023) apply a critical lens to the pedagogical use of LLM bioethics assistants, we outline here further reason for skepticism. Two features of LLM chatbots are of significance: their agreeability and their unreliability. First, LLM assistants are agreeable in that they are trained to produce outputs that satisfy the user. Second, as we outline in greater detail below, they are unreliable in that they can produce variable answers with little or no change to the user's inputs.
To illustrate the unreliability of LLM assistants, we prompted OpenAI's GPT-4 model (28 July 2023) with the original prompt from Rahimzadeh et al. (2023), both repeatedly and with minimal changes. First, when prompted twice with the same instruction ("Complete an ethics work up on a case of a woman refusing a needed csection"), the LLM produced disparate answers: as is common for LLMs, the outputs …
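The repeated-prompting check described above can be sketched in code. This is an illustrative reconstruction only, not the authors' actual protocol: the model identifier, the use of the OpenAI Python client, and the simple output comparison are assumptions.

```python
# Minimal sketch of the repeated-prompting check described in the abstract.
# Assumptions: OpenAI Python client (openai>=1.0), "gpt-4" as the model name,
# and a naive string comparison of the two replies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Complete an ethics work up on a case of a woman "
          "refusing a needed csection")

def ask(prompt: str) -> str:
    """Send one chat request and return the text of the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Issue the identical prompt twice and check whether the outputs diverge.
first = ask(PROMPT)
second = ask(PROMPT)
print("Identical outputs?", first == second)
```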
Total citations
Scholar articles
L Goetz, M Trengove, A Trotsyuk, CA Federico - The American Journal of Bioethics, 2023