Dynamic Task-Oriented Dialogue: A Comparative Study of Llama-2 and BERT in Slot Value Generation

T Labruna, S Brenna, B Magnini - … of the 18th Conference of the …, 2024 - aclanthology.org
Recent advancements in instruction-based language models have demonstrated
exceptional performance across various natural language processing tasks. We present a …

Frugal prompting for dialog models

B Santra, S Basak, A De, M Gupta, P Goyal - arXiv preprint arXiv …, 2023 - arxiv.org
The use of large language models (LLMs) in natural language processing (NLP) tasks is
rapidly increasing, leading to changes in how researchers approach problems in the field …

CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues

MN Sreedhar, T Rebedea, S Ghosh… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in instruction-tuning datasets have predominantly focused on specific
tasks like mathematical or logical reasoning. There has been a notable gap in data …

Understanding the effectiveness of very large language models on dialog evaluation

J Huynh, C Jiao, P Gupta, S Mehri, P Bajaj… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models have steadily increased in size over the past few years. They achieve a
high level of performance on various natural language processing (NLP) tasks such as …

Raw Text is All you Need: Knowledge-intensive Multi-turn Instruction Tuning for Large Language Model

X Hou, Q Li, J Yang, T Li, L Chai, X Wu, H Ji… - arXiv preprint arXiv …, 2024 - arxiv.org
Instruction tuning as an effective technique aligns the outputs of large language models
(LLMs) with human preference. But how to generate the seasonal multi-turn dialogues from …

A comparative study on language models for task-oriented dialogue systems

V Marselino Andreas, G Indra Winata… - arXiv e …, 2022 - ui.adsabs.harvard.edu
The recent development of language models has shown promising results by achieving
state-of-the-art performance on various natural language tasks by fine-tuning pretrained …

Context-dependent Instruction Tuning for Dialogue Response Generation

JM Kwak, M Kim, SJ Hwang - arXiv preprint arXiv:2311.07006, 2023 - arxiv.org
Recent language models have achieved impressive performance in natural language tasks
by incorporating instructions with task input during fine-tuning. Since all samples in the same …

Injecting domain knowledge in language models for task-oriented dialogue systems

D Emelin, D Bonadiman, S Alqahtani, Y Zhang… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-trained language models (PLM) have advanced the state-of-the-art across NLP
applications, but lack domain-specific knowledge that does not naturally occur in pre …

Prompting frameworks for large language models: A survey

X Liu, J Wang, J Sun, X Yuan, G Dong, P Di… - arXiv preprint arXiv …, 2023 - arxiv.org
Since the launch of ChatGPT, a powerful AI Chatbot developed by OpenAI, large language
models (LLMs) have made significant advancements in both academia and industry …