Large foundation models, including large language models (LLMs), vision transformers (ViTs), diffusion models, and LLM-based multimodal models, are revolutionizing the entire machine …
L Fan, J Liu, Z Liu, D Lo, X Xia, S Li - ACM Transactions on Software …, 2024 - dl.acm.org
Developers deal with code-change-related tasks daily, e.g., reviewing code. Pre-trained code models and code-change-oriented models have been adapted to help developers with such tasks …
While large language models (LLMs) like ChatGPT have shown impressive capabilities in Natural Language Processing (NLP) tasks, a systematic investigation of their potential in this …
Y Ouyang, Y Cao, Y Gao, Z Wu, J Zhang… - Proceedings of the 61st …, 2023 - aclanthology.org
Out-of-distribution (OOD) detection, a fundamental task vexing real-world applications, has attracted growing attention in the NLP community. Recently, fine-tuning …
Label scarcity is a bottleneck for improving task performance in specialized domains. We propose a novel compositional transfer learning framework (DoT5) for zero-shot domain …
Existing knowledge-grounded conversation systems typically generate responses in a retrieve-then-generate manner. They require a large knowledge base and a strong …
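Purely as an illustration of the retrieve-then-generate pattern this entry mentions, here is a minimal, self-contained sketch with a toy knowledge base, a bag-of-words retriever, and a stub generator. All names (KNOWLEDGE_BASE, retrieve, generate) are hypothetical and are not drawn from any paper in this listing:

```python
# Hypothetical sketch of a retrieve-then-generate pipeline: first retrieve
# supporting evidence from a knowledge base, then generate a response
# conditioned on it. Toy stand-ins, not any cited paper's method.
from collections import Counter

KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Python is a programming language created by Guido van Rossum.",
    "The Pacific Ocean is the largest ocean on Earth.",
]

def retrieve(query: str, kb: list[str]) -> str:
    """Return the passage with the highest bag-of-words overlap with the query."""
    q_tokens = Counter(query.lower().split())
    def overlap(passage: str) -> int:
        # Counter intersection keeps the minimum count per shared token.
        return sum((q_tokens & Counter(passage.lower().split())).values())
    return max(kb, key=overlap)

def generate(query: str, evidence: str) -> str:
    """Stub generator: a real system would condition a language model on the evidence."""
    return f"Based on: '{evidence}' -> answer to '{query}'"

if __name__ == "__main__":
    q = "Where is the Eiffel Tower located?"
    passage = retrieve(q, KNOWLEDGE_BASE)  # retrieval step
    print(generate(q, passage))            # generation step
```

The two-step structure makes the dependence on a large, high-quality knowledge base explicit: if retrieval returns a poor passage, the generator has nothing useful to condition on.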
Query-focused summarization has been considered an important extension of text summarization. It aims to generate a concise highlight for a given query. Different from text …
Traditional dialogue summarization models rely on large-scale manually labeled corpora and lack generalization ability to new domains; domain adaptation from a labeled source …