S Jie, X Huang, C Jing, X Jiang, L Dong - Scientific Reports, 2024 - nature.com
Due to hallucination by the underlying large language models (LLMs) or an unclear description of the task's ultimate goal, the agents can become confused. Despite …