Misinforming LLMs: vulnerabilities, challenges and opportunities

B Zhou, D Geißler, P Lukowicz - arXiv preprint arXiv:2408.01168, 2024 - arxiv.org
Large Language Models (LLMs) have made significant advances in natural language
processing, but their underlying mechanisms are often misunderstood. Despite exhibiting …

Large models of what? Mistaking engineering achievements for human linguistic agency

A Birhane, M McGann - Language Sciences, 2024 - Elsevier
In this paper we argue that key, often sensational and misleading, claims regarding linguistic
capabilities of Large Language Models (LLMs) are based on at least two unfounded …

The FHJ debate: Will artificial intelligence replace clinical decision making within our lifetimes?

J Hatherley, A Kinderlerer, JC Bjerring, LA Munch… - Future Healthcare …, 2024 - Elsevier
AI systems could replace clinical decision making in two ways. First, from the 'top down', by
way of hospitals and healthcare organisations. Second, from the 'bottom up', by way of …

LLMs Will Always Hallucinate, and We Need to Live With This

S Banerjee, A Agarwal, S Singla - arXiv preprint arXiv:2409.05746, 2024 - arxiv.org
As Large Language Models become more ubiquitous across domains, it becomes important
to examine their inherent limitations critically. This work argues that hallucinations in …

Evaluating the Performance and Robustness of LLMs in Materials Science Q&A and Property Predictions

H Wang, K Li, S Ramsay, Y Fehlis, E Kim… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have the potential to revolutionize scientific research, yet
their robustness and reliability in domain-specific applications remain insufficiently explored …

'Fighting fire with fire'—using LLMs to combat LLM hallucinations

K Verspoor - 2024 - nature.com

Assurance of AI Systems From a Dependability Perspective

R Bloomfield, J Rushby - arXiv preprint arXiv:2407.13948, 2024 - arxiv.org
We outline the principles of classical assurance for computer-based systems that pose
significant risks. We then consider application of these principles to systems that employ …

Psychomatics—A Multidisciplinary Framework for Understanding Artificial Minds

G Riva, F Mantovani, BK Wiederhold… - … , Behavior, and Social …, 2024 - liebertpub.com
Although large language models (LLMs) and other artificial intelligence systems
demonstrate cognitive skills similar to humans, such as concept learning and language …

Development and Initial Testing of an Artificial Intelligence-Based Virtual Reality Companion for People Living with Dementia in Long-Term Care

L Sheehy, S Bouchard, A Kakkar, R El Hakim… - Journal of Clinical …, 2024 - mdpi.com
Background/Objectives: Feelings of loneliness are common in people living with dementia
(PLWD) in long-term care (LTC). The goals of this study were to describe the development of …

Transforming Agency. On the mode of existence of Large Language Models

XE Barandiaran, LS Almendros - arXiv preprint arXiv:2407.10735, 2024 - arxiv.org
This paper investigates the ontological characterization of Large Language Models (LLMs)
like ChatGPT. Between inflationary and deflationary accounts, we pay special attention to …