Comparing the efficacy of large language models ChatGPT, BARD, and Bing AI in providing information on rhinoplasty: an observational study

I Seth, B Lim, Y Xie, J Cevik, WM Rozen, RJ Ross, M Lee
Aesthetic Surgery Journal Open Forum, 2023 - academic.oup.com
Abstract
Background
Large language models (LLMs) are emerging artificial intelligence (AI) technologies refining research and healthcare. However, the impact of these models on presurgical planning and education remains under-explored.
Objectives
This study aims to assess 3 prominent LLMs—Google's AI BARD (Mountain View, CA), Bing AI (Microsoft, Redmond, WA), and ChatGPT-3.5 (OpenAI, San Francisco, CA)—in providing safe medical information for rhinoplasty.
Methods
Six questions regarding rhinoplasty were posed to ChatGPT, BARD, and Bing AI. The responses were evaluated on a Likert scale by a panel of Specialist Plastic and Reconstructive Surgeons with extensive experience in rhinoplasty. To measure readability, the Flesch Reading Ease Score, the Flesch–Kincaid Grade Level, and the Coleman–Liau Index were used. The modified DISCERN score was chosen as the criterion for assessing suitability and reliability. A t test was performed to assess differences between the LLMs, and a two-sided P-value <.05 was considered statistically significant.
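The three readability indices named above are closed-form functions of basic text statistics. As a minimal sketch, assuming word, sentence, letter, and syllable counts are obtained separately (for example from a text-analysis tool of the reader's choosing), they can be computed as follows:

```python
# Standard published formulas for the three readability indices used in the
# Methods. Counting words, sentences, letters, and syllables is left to the
# caller (an assumption of this sketch), since counting tools differ.

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # 0-100 scale; higher scores indicate text that is easier to read.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Approximate US school grade level needed to understand the text.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def coleman_liau_index(letters: int, words: int, sentences: int) -> float:
    # L = average letters per 100 words; S = average sentences per 100 words.
    L = letters / words * 100
    S = sentences / words * 100
    return 0.0588 * L - 0.296 * S - 15.8
```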
Results
In terms of readability, BARD and ChatGPT demonstrated a significantly (P < .05) greater Flesch Reading Ease Score (47.47 ± 15.32 and 37.68 ± 12.96, respectively), Flesch–Kincaid Grade Level (9.7 ± 3.12 and 10.15 ± 1.84), and Coleman–Liau Index (10.83 ± 2.14 and 12.17 ± 1.17) than Bing AI. In terms of suitability, BARD (46.3 ± 2.8) demonstrated a significantly greater DISCERN score than ChatGPT and Bing AI. In terms of Likert score, ChatGPT and BARD achieved similar scores, both greater than Bing AI's.
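The pairwise significance claims above follow the two-sided t test described in the Methods. A minimal illustration of such a comparison, using placeholder per-question scores rather than the study's data, might look like this:

```python
# Illustrative only: a two-sided independent-samples t test between two
# models' per-question readability scores. The values below are placeholders,
# not the scores reported in this study.
from scipy import stats

bard_scores = [52.1, 38.4, 61.0, 44.7, 35.9, 52.7]  # hypothetical, one per question
bing_scores = [28.3, 30.1, 22.8, 35.6, 27.4, 31.0]  # hypothetical, one per question

t_stat, p_value = stats.ttest_ind(bard_scores, bing_scores)
print(f"t = {t_stat:.2f}, two-sided P = {p_value:.4f}")
# A two-sided P-value below .05 would be read as a statistically significant difference.
```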
Conclusions
BARD delivered the most succinct and comprehensible information, followed by ChatGPT and Bing AI. Although these models demonstrate potential, challenges regarding their depth and specificity remain. Therefore, future research should aim to augment LLM performance through the integration of specialized databases and expert knowledge, while also refining their algorithms.
Level of Evidence: 5