Recent advances in deep learning based dialogue systems: A systematic survey

J Ni, T Young, V Pandelea, F Xue… - Artificial intelligence review, 2023 - Springer
Dialogue systems are a popular natural language processing (NLP) task, as they are promising in
real-life applications. They are also complicated, since many NLP tasks deserving study are …

MTEB: Massive text embedding benchmark

N Muennighoff, N Tazi, L Magne, N Reimers - arXiv preprint arXiv …, 2022 - arxiv.org
Text embeddings are commonly evaluated on a small set of datasets from a single task not
covering their possible applications to other tasks. It is unclear whether state-of-the-art …

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-
trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …

Multi-task pre-training for plug-and-play task-oriented dialogue system

Y Su, L Shu, E Mansimov, A Gupta, D Cai… - arXiv preprint arXiv …, 2021 - arxiv.org
Pre-trained language models have been recently shown to benefit task-oriented dialogue
(TOD) systems. Despite their success, existing methods often formulate this task as a …

Probing pretrained language models for lexical semantics

I Vulić, EM Ponti, R Litschko, G Glavaš… - Proceedings of the …, 2020 - aclanthology.org
The success of large pretrained language models (LMs) such as BERT and RoBERTa has
sparked interest in probing their representations, in order to unveil what types of knowledge …

InstructDial: Improving zero and few-shot generalization in dialogue through instruction tuning

P Gupta, C Jiao, YT Yeh, S Mehri, M Eskenazi… - arXiv preprint arXiv …, 2022 - arxiv.org
Instruction tuning is an emergent paradigm in NLP wherein natural language instructions
are leveraged with language models to induce zero-shot performance on unseen tasks …

ConveRT: Efficient and accurate conversational representations from transformers

M Henderson, I Casanueva, N Mrkšić, PH Su… - arXiv preprint arXiv …, 2019 - arxiv.org
General-purpose pretrained sentence encoders such as BERT are not ideal for real-world
conversational AI applications; they are computationally heavy, slow, and expensive to train …

Soloist: Building Task Bots at Scale with Transfer Learning and Machine Teaching

B Peng, C Li, J Li, S Shayandeh, L Liden… - Transactions of the …, 2021 - direct.mit.edu
We present a new method, Soloist, that uses transfer learning and machine teaching to build
task bots at scale. We parameterize classical modular task-oriented dialog systems using a …

KNN-contrastive learning for out-of-domain intent classification

Y Zhou, P Liu, X Qiu - Proceedings of the 60th Annual Meeting of …, 2022 - aclanthology.org
Out-of-Domain (OOD) intent classification is a basic and challenging task for
dialogue systems. Previous methods commonly restrict the region (in feature space) of In …

Parallel context windows for large language models

N Ratner, Y Levine, Y Belinkov, O Ram, I Magar… - arXiv preprint arXiv …, 2022 - arxiv.org
When applied to processing long text, Large Language Models (LLMs) are limited by their
context window. Existing efforts to address this limitation involve training specialized …