MOGAN: Morphologic-structure-aware generative learning from a single image

J Chen, Q Xu, Q Kang, MC Zhou - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
In most interactive image generation tasks, given regions of interest (ROIs) specified by users, the
generated results are expected to show adequate diversity in appearance while …

Forget about it: Entity-level working memory models for referring expression generation in robot cognitive architectures

R Sousa Silva, M Lieng, T Williams - … of the annual meeting of the …, 2023 - escholarship.org
Working Memory (WM) plays a key role in natural language understanding and generation.
To enable a human-like breadth and flexibility of language understanding and generation …

Team3 challenge: Tasks for multi-human and multi-robot collaboration with voice and gestures

MJ Munje, LK Teran, B Thymes… - Companion of the 2023 …, 2023 - dl.acm.org
Intuitive human-robot collaboration requires adaptive modalities for humans and robots to
communicate and learn from each other. For diverse teams of humans and robots to …

Treat robots as humans? Perspective choice in human-human and human-robot spatial language interaction

C Xiao, W Wu, J Zhang, L Xu - Spatial Cognition & Computation, 2023 - Taylor & Francis
Spatial language interaction is critical for human-robot interaction. However, previous
findings on people's perspective choices toward robots and humans are inconsistent. In two …

Fast Elastic-Net Multi-view Clustering: A Geometric Interpretation Perspective

Y Qin, L Qian - Proceedings of the 32nd ACM International …, 2024 - dl.acm.org
Multi-view clustering methods have been extensively explored over the last few decades. Such
methods are built on the assumption that the data are sampled from multiple subspaces …

Involuntary Stabilization in Discrete-Event Physical Human–Robot Interaction

H Muramatsu, Y Itaguchi… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Robots are used not only as tools but also to interactively assist and cooperate with humans,
thereby forming physical human–robot interactions. In these interactions, there …

More Than Meets the Eye? An Experimental Design to Test Robot Visual Perspective-Taking Facilitators Beyond Mere-Appearance

J Currie, KL Mcdonough, A Wykowska… - Companion of the 2024 …, 2024 - dl.acm.org
Visual Perspective Taking (VPT) underpins human social interaction, from joint action to
predicting others' future actions and mentalizing about their goals and affective/mental …

Optimizing Personalized Robot Actions with Ranking of Trajectories

H Huang, Y Liu, S Yuan, C Wen, Y Hao… - … Conference on Pattern …, 2024 - Springer
Intelligent robots designed for real-world human interactions need to adapt to the diverse
preferences of individuals. Preference-based Reinforcement Learning (PbRL) offers …

Style-Based Reinforcement Learning: Task Decoupling Personalization for Human-Robot Collaboration

M Bonyani, M Soleymani, C Wang - International Conference on Human …, 2024 - Springer
Intelligent robots that are intended to engage with people in real life must be able to adjust to
the varying tastes of their users. Robots can be taught personalized behaviors through …

Spatial Term Variety Reflected in Eye Movements on Visual Scenes

C Acarturk, SN Ertekin - Proceedings of the Annual Meeting of the …, 2024 - escholarship.org
Verbal descriptions of spatial configurations open a window to a specific aspect of visual
cognition relevant to the interpretation of topological relations in the visual world. The …