Recently, pre-training methods have shown remarkable success in task-oriented dialog (TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …
Pre-training methods with contrastive learning objectives have shown remarkable success in dialog understanding tasks. However, current contrastive learning solely considers the …
Future conversational agents will provide users with personalized information responses. However, a significant challenge in developing such models is the lack of large-scale …
C Liu, R Wang, J Jiang, Y Li, F Huang - arXiv preprint arXiv:2210.15332, 2022 - arxiv.org
In this paper, we introduce the task of learning unsupervised dialogue embeddings. Trivial approaches such as combining pre-trained word or sentence embeddings and encoding …
L Pishdad, F Fancellu, R Zhang… - Proceedings of the 28th …, 2020 - aclanthology.org
Despite the recent advances in coherence modelling, most such models, including state-of-the-art neural ones, are evaluated on either contrived proxy tasks such as the standard order …
Creating conversational dialog systems that are able to converse naturally and engagingly with humans on any topic remains one of the fundamental challenges of artificial …
Discourse relation identification has been an active area of research for many years, and the challenge of identifying implicit relations remains largely an unsolved task, especially in the …
J Gu, Q Wu, C Wu, W Shi, Z Yu - … of the 59th Annual Meeting of the …, 2021 - aclanthology.org
Large pre-trained language generation models such as GPT-2 have demonstrated their effectiveness as language priors by reaching state-of-the-art results in various language …