GALAXY: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection

W He, Y Dai, Y Zheng, Y Wu, Z Cao, D Liu… - Proceedings of the …, 2022 - ojs.aaai.org
Pre-trained models have proved to be powerful in enhancing task-oriented dialog systems.
However, current pre-training methods mainly focus on enhancing dialog understanding …

Unified dialog model pre-training for task-oriented dialog understanding and generation

W He, Y Dai, M Yang, J Sun, F Huang, L Si… - Proceedings of the 45th …, 2022 - dl.acm.org
Recently, pre-training methods have shown remarkable success in task-oriented dialog
(TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …

SPACE-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog understanding

W He, Y Dai, B Hui, M Yang, Z Cao, J Dong… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-training methods with contrastive learning objectives have shown remarkable success
in dialog understanding tasks. However, current contrastive learning solely considers the …

Doing personal LAPS: LLM-augmented dialogue construction for personalized multi-session conversational search

H Joko, S Chatterjee, A Ramsay, AP De Vries… - Proceedings of the 47th …, 2024 - dl.acm.org
The future of conversational agents will provide users with personalized information
responses. However, a significant challenge in developing models is the lack of large-scale …

Dial2vec: Self-guided contrastive learning of unsupervised dialogue embeddings

C Liu, R Wang, J Jiang, Y Li, F Huang - arXiv preprint arXiv:2210.15332, 2022 - arxiv.org
In this paper, we introduce the task of learning unsupervised dialogue embeddings. Trivial
approaches such as combining pre-trained word or sentence embeddings and encoding …

SPACE-3: Unified dialog model pre-training for task-oriented dialog understanding and generation

W He, Y Dai, M Yang, J Sun, F Huang, L Si… - arXiv preprint arXiv …, 2022 - arxiv.org
Recently, pre-training methods have shown remarkable success in task-oriented dialog
(TOD) systems. However, most existing pre-trained models for TOD focus on either dialog …

How coherent are neural models of coherence?

L Pishdad, F Fancellu, R Zhang… - Proceedings of the 28th …, 2020 - aclanthology.org
Despite the recent advances in coherence modelling, most such models, including state-of-
the-art neural ones, are evaluated on either contrived proxy tasks such as the standard order …

Advancing open domain dialog: The fifth Alexa Prize socialbot grand challenge

M Johnston, C Flagg, A Gottardi, S Sahai, Y Lu, S Sagi… - 2023 - amazon.science
Creating conversational dialog systems that are able to converse naturally and engagingly
with humans on any topic remains one of the fundamental challenges of artificial …

Implicit discourse relation identification for open-domain dialogues

MD Ma, KK Bowden, J Wu, W Cui, M Walker - arXiv preprint arXiv …, 2019 - arxiv.org
Discourse relation identification has been an active area of research for many years, and the
challenge of identifying implicit relations remains largely an unsolved task, especially in the …

PRAL: A tailored pre-training model for task-oriented dialog generation

J Gu, Q Wu, C Wu, W Shi, Z Yu - … of the 59th Annual Meeting of the …, 2021 - aclanthology.org
Large pre-trained language generation models such as GPT-2 have demonstrated their
effectiveness as language priors by reaching state-of-the-art results in various language …