LLM-Powered Conversational Voice Assistants: Interaction Patterns, Opportunities, Challenges, and Design Guidelines

A Mahmood, J Wang, B Yao, D Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Conventional Voice Assistants (VAs) rely on traditional language models to discern user
intent and respond to their queries, leading to interactions that often lack a broader …

State-of-the-art human-computer interaction in metaverse

Z Lyu - International Journal of Human–Computer Interaction, 2024 - Taylor & Francis
With the increasing popularity of the Metaverse concept, intelligent technology has reached a
new stage of progress. This work presents a literature review of …

Using social cues to recognize task failures for HRI: A review of current research and future directions

A Bremers, A Pabst, MT Parreira, W Ju - arXiv preprint arXiv:2301.11972, 2023 - arxiv.org
Robots that carry out tasks and interact in complex environments will inevitably commit
errors. Error detection is thus an important ability for robots to master, to work in an efficient …

" I don't know how to help with that"-Learning from Limitations of Modern Conversational Agent Systems in Caregiving Networks

T Zubatiy, N Mathur, L Heck, KL Vickers… - Proceedings of the …, 2023 - dl.acm.org
While commercial conversational agents (CAs) (i.e., Google Assistant, Siri, Alexa) are widely
used, these systems have limitations in error-handling, flexibility, personalization and overall …

Creepy assistant: Development and validation of a scale to measure the perceived creepiness of voice assistants

R Phinnemore, M Reza, B Lewis… - Proceedings of the …, 2023 - dl.acm.org
Voice assistants have afforded users rich interaction opportunities to access information and
issue commands in a variety of contexts. However, some users feel uneasy or creeped out …

Can voice assistants be microaggressors? Cross-race psychological responses to failures of automatic speech recognition

K Wenzel, N Devireddy, C Davison… - Proceedings of the 2023 …, 2023 - dl.acm.org
Language technologies exhibit racial bias, making more errors for Black users than for
white users. However, little work has evaluated what effect these disparate error rates have …

A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures

A Baughan, X Wang, A Liu, A Mercurio… - Proceedings of the 2023 …, 2023 - dl.acm.org
Despite major recent gains in natural language understanding driven by large language
models, voice assistants still often fail to meet user expectations. In this study …

To Err is AI: Imperfect Interventions and Repair in a Conversational Agent Facilitating Group Chat Discussions

HJ Do, HK Kong, P Tetali, J Lee, BP Bailey - Proceedings of the ACM on …, 2023 - dl.acm.org
Conversational agents (CAs) can analyze online conversations using natural language
techniques and effectively facilitate group discussions by sending supervisory messages …

“As an AI language model, I cannot”: Investigating LLM Denials of User Requests

J Wester, T Schrills, H Pohl, N van Berkel - Proceedings of the CHI …, 2024 - dl.acm.org
Users ask large language models (LLMs) for help with their homework, for lifestyle advice, or
for support in making challenging decisions. Yet LLMs are often unable to fulfil these …

The Bot on Speaking Terms: The Effects of Conversation Architecture on Perceptions of Conversational Agents

CZ Wei, YH Kim, A Kuzminykh - … of the 5th International Conference on …, 2023 - dl.acm.org
Conversational agents mimic natural conversation to interact with users. Since the
effectiveness of interactions strongly depends on users' perception of agents, it is crucial to …