CADS: A Systematic Literature Review on the Challenges of Abstractive Dialogue Summarization

F Kirstein, JP Wahle, B Gipp, T Ruas - Journal of Artificial Intelligence …, 2025 - jair.org
Abstractive dialogue summarization is the task of distilling conversations into informative
and concise summaries. Although focused reviews have been conducted on this topic, there …

Domain adaptation for subjective induction questions answering on products by adversarial disentangled learning

Y Zhang, J Yu, Y Rao, L Zheng, Q Su… - Proceedings of the …, 2024 - aclanthology.org
This paper focuses on answering subjective questions about products. Unlike a factoid
question with a single answer span, a subjective question involves multiple viewpoints …

A Practical Guide to Fine-tuning Language Models with Limited Data

M Szép, D Rueckert, R von Eisenhart-Rothe… - arXiv preprint arXiv …, 2024 - arxiv.org
Employing pre-trained Large Language Models (LLMs) has become the de facto standard in
Natural Language Processing (NLP) despite their extensive data requirements. Motivated by …

A Novel Topic Segmentation Approach for Enhanced Dialogue Summarization

Z Ren - 2024 - adwenpub.com
Dialogue summarization aims to distill a given conversation into a brief and focused
summary. The challenge lies in the diverse perspectives of participants and the frequent …

OmniDialog: An Omnipotent Pre-training Model for Task-Oriented Dialogue System

M Yang, SK Ng, J Fu - arXiv preprint arXiv:2312.16864, 2023 - arxiv.org
Pre-trained conversation models (PCMs) have demonstrated remarkable results in task-
oriented dialogue (TOD) systems. Many PCMs focus predominantly on dialogue …

Adaptive: Adaptive Domain Mining for Fine-grained Domain Adaptation Modeling

W Sun, Z Yang, Y Wang, Z Zhang, Z Wang, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Advertising systems often face the multi-domain challenge, where data distributions vary
significantly across scenarios. Existing domain adaptation methods primarily focus on …

MoSLD: An Extremely Parameter-Efficient Mixture-of-Shared LoRAs for Multi-Task Learning

L Zhao, W Zeng, X Shi, H Zhou - arXiv preprint arXiv:2412.08946, 2024 - arxiv.org
Recently, LoRA has emerged as a crucial technique for fine-tuning large pre-trained models,
yet its performance in multi-task learning scenarios often falls short. In contrast, the MoE …
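For context on the LoRA technique this entry builds on, below is a minimal PyTorch-style sketch of a standard low-rank adapter wrapped around a frozen linear layer. This is a generic illustration only, not the paper's MoSLD mixture-of-shared-LoRAs method; the class name, rank, and scaling values are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update (generic LoRA sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to `rank`, B projects back up; only these train.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scaling * (B A) x, where the update B A has rank at most `rank`.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    out = layer(torch.randn(2, 10, 768))
    print(out.shape)  # torch.Size([2, 10, 768])
```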

COOL: Comprehensive Knowledge Enhanced Prompt Learning for Domain Adaptive Few-shot Fake News Detection

Y Ouyang, P Wu, L Pan - arXiv preprint arXiv:2406.10870, 2024 - arxiv.org
Most Fake News Detection (FND) methods struggle with data scarcity in emerging news
domains. Recently, prompt learning based on Pre-trained Language Models (PLMs) has …

Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models

C Li, X Liu, Z Zhang, Y Wang, C Liu, Y Lan… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advances in prompt optimization have notably enhanced the performance of pre-
trained language models (PLMs) on downstream tasks. However, the potential of optimized …

Hypernetwork-Assisted Parameter-Efficient Fine-Tuning with Meta-Knowledge Distillation for Domain Knowledge Disentanglement

C Li, L Wang, X Lin, S Huang, L He - Findings of the Association …, 2024 - aclanthology.org
Domain adaptation from labeled source domains to the target domain is important
in practical summarization scenarios. However, the key challenge is domain knowledge …