Continual learning for natural language generation in task-oriented dialog systems

F Mi, L Chen, M Zhao, M Huang, B Faltings - arXiv preprint arXiv …, 2020 - arxiv.org
Natural language generation (NLG) is an essential component of task-oriented dialog
systems. Despite the recent success of neural approaches for NLG, they are typically …

Continual learning for task-oriented dialogue system with iterative network pruning, expanding and masking

B Geng, F Yuan, Q Xu, Y Shen, R Xu… - arXiv preprint arXiv …, 2021 - arxiv.org
This ability to learn consecutive tasks without forgetting how to solve previously learned
ones is essential for developing an online dialogue system. This paper proposes an …

Prompt conditioned VAE: Enhancing generative replay for lifelong learning in task-oriented dialogue

Y Zhao, Y Zheng, Z Tian, C Gao, B Yu, H Yu… - arXiv preprint arXiv …, 2022 - arxiv.org
Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems. To
address the catastrophic forgetting issue of LL, generative replay methods are widely …

Continual prompt tuning for dialog state tracking

Q Zhu, B Li, F Mi, X Zhu, M Huang - arXiv preprint arXiv:2203.06654, 2022 - arxiv.org
A desirable dialog system should be able to continually learn new skills without forgetting
old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually …

AugNLG: Few-shot natural language generation using self-trained data augmentation

X Xu, G Wang, YB Kim, S Lee - arXiv preprint arXiv:2106.05589, 2021 - arxiv.org
Natural Language Generation (NLG) is a key component in a task-oriented dialogue system,
which converts a structured meaning representation (MR) into natural language. For …

Few-shot NLG with pre-trained language model

Z Chen, H Eavani, W Chen, Y Liu, WY Wang - arXiv preprint arXiv …, 2019 - arxiv.org
Neural-based end-to-end approaches to natural language generation (NLG) from structured
data or knowledge are data-hungry, making their adoption for real-world applications difficult …

Distill and replay for continual language learning

J Sun, S Wang, J Zhang, C Zong - Proceedings of the 28th …, 2020 - aclanthology.org
Accumulating knowledge to tackle new tasks without necessarily forgetting the old ones is a
hallmark of human-like intelligence. But the current dominant paradigm of machine learning …

Meta-learning for low-resource natural language generation in task-oriented dialogue systems

F Mi, M Huang, J Zhang, B Faltings - arXiv preprint arXiv:1905.05644, 2019 - arxiv.org
Natural language generation (NLG) is an essential component of task-oriented dialogue
systems. Despite the recent success of neural approaches for NLG, they are typically …

Continual learning in task-oriented dialogue systems

A Madotto, Z Lin, Z Zhou, S Moon, P Crook… - arXiv preprint arXiv …, 2020 - arxiv.org
Continual learning in task-oriented dialogue systems can allow us to add new domains and
functionalities through time without incurring the high cost of a whole system retraining. In …

Continual pre-training of language models

Z Ke, Y Shao, H Lin, T Konishi, G Kim, B Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Language models (LMs) have been instrumental for the rapid advance of natural language
processing. This paper studies continual pre-training of LMs, in particular, continual domain …