Domain-Specific Improvement on Psychotherapy Chatbot Using Assistant

C Kang, D Novak, K Urbanova, Y Cheng, Y Hu
arXiv preprint arXiv:2404.16160, 2024 (arxiv.org)
Large language models (LLMs) have demonstrated impressive generalization capabilities on specific tasks using human-written instruction data. However, the limited quantity, diversity, and professional expertise of such instruction data raise concerns about the performance of LLMs on psychotherapy tasks when given domain-specific instructions. To address this, we first propose Domain-Specific Assistant Instructions based on AlexanderStreet therapy, and second, we apply an adaptation fine-tuning method and a retrieval-augmented generation method to improve pre-trained LLMs. Through quantitative evaluation of linguistic quality using automatic and human evaluation, we observe that LLMs fine-tuned on the Psychotherapy Assistant Instructions outperform state-of-the-art LLM response baselines. Our Assistant-Instruction approach offers a half-annotation method for aligning pre-trained LLMs with instructions and provides them with additional psychotherapy knowledge.
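The pipeline the abstract describes combines instruction fine-tuning with retrieval-augmented generation: relevant passages from the therapy corpus are retrieved and prepended to the prompt before the fine-tuned model responds. The sketch below illustrates only that retrieval step in minimal, self-contained form; the corpus snippets, the embed/retrieve helpers, and the prompt template are hypothetical stand-ins, not the authors' implementation or data.

```python
# Minimal sketch of a retrieval-augmented prompting step, under the
# assumption that retrieved corpus passages are prepended as context.
# Everything here is hypothetical: a toy bag-of-words "embedding" stands
# in for a learned encoder, and the corpus stands in for the paper's
# AlexanderStreet-derived instruction data, which is not public here.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy term-frequency vector; a real system would use a learned encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical stand-in for the domain-specific therapy corpus.
corpus = [
    "Cognitive restructuring helps clients reframe negative automatic thoughts.",
    "Reflective listening signals empathy and invites the client to elaborate.",
    "Behavioral activation schedules rewarding activities to counter low mood.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank corpus passages by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Retrieved passages ground the response; the fine-tuned model would
    # then complete the "Therapist:" turn.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nClient: {query}\nTherapist:"

print(build_prompt("I keep having negative thoughts about myself."))
```

In a full system, the toy embedding and the in-memory list would be replaced by a trained retriever and a vector index over the corpus; the prompt-assembly pattern stays the same.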