Natural language reasoning, a survey

F Yu, H Zhang, P Tiwari, B Wang - ACM Computing Surveys, 2024 - dl.acm.org
This survey article proposes a clearer view of Natural Language Reasoning (NLR) in the
field of Natural Language Processing (NLP), both conceptually and practically …

Transformer: A general framework from machine translation to others

Y Zhao, J Zhang, C Zong - Machine Intelligence Research, 2023 - Springer
Machine translation is an important and challenging task that aims at automatically
translating natural language sentences from one language into another. Recently …
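
The framework this survey generalizes rests on scaled dot-product attention; a minimal NumPy sketch of that formula (the shapes and variable names here are illustrative, not drawn from the survey):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n_q, n_k) logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # (n_q, d_v)

# Toy example: 2 query positions attending over 3 key/value positions.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)        # (2, 4)
```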

SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization

P Laban, T Schnabel, PN Bennett… - Transactions of the …, 2022 - direct.mit.edu
In the summarization domain, a key requirement for summaries is to be factually consistent
with the input document. Previous work has found that natural language inference (NLI) …
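
The SummaC idea can be sketched in a few lines: build a matrix of NLI entailment scores between document sentences (premises) and summary sentences (hypotheses), then aggregate. Roughly the zero-shot variant, with the NLI model abstracted into a precomputed matrix:

```python
import numpy as np

def summac_zs_score(entail_matrix):
    """entail_matrix[i, j]: NLI entailment probability of summary
    sentence j given document sentence i (the NLI model itself is
    abstracted away here)."""
    # For each summary sentence, keep its best-supporting document sentence...
    per_sentence = entail_matrix.max(axis=0)
    # ...then average over summary sentences for a document-level score.
    return per_sentence.mean()

# Toy matrix: 3 document sentences x 2 summary sentences.
m = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.1, 0.2]])
print(summac_zs_score(m))  # 0.85
```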

Approximate nearest neighbor negative contrastive learning for dense text retrieval

L Xiong, C Xiong, Y Li, KF Tang, J Liu… - arXiv preprint arXiv …, 2020 - arxiv.org
Conducting text retrieval in a dense learned representation space has many intriguing
advantages over sparse retrieval. Yet the effectiveness of dense retrieval (DR) often requires …
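
ANCE's key move is training with hard negatives mined from an approximate-nearest-neighbor index over the corpus, rather than random or in-batch negatives; a minimal sketch of the resulting contrastive objective (the embeddings and names are placeholders):

```python
import numpy as np

def contrastive_loss(q, d_pos, d_negs):
    """Negative log-likelihood of the positive document under a softmax
    over dot-product similarities. In ANCE, d_negs come from an
    (asynchronously refreshed) ANN index over the corpus."""
    sims = np.array([q @ d_pos] + [q @ d for d in d_negs])
    sims -= sims.max()                       # numerical stability
    log_softmax = sims - np.log(np.exp(sims).sum())
    return -log_softmax[0]                   # index 0 is the positive

rng = np.random.default_rng(1)
q = rng.normal(size=8)
print(contrastive_loss(q, rng.normal(size=8),
                       [rng.normal(size=8) for _ in range(4)]))
```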

[BOOK][B] Pretrained transformers for text ranking: BERT and beyond

J Lin, R Nogueira, A Yates - 2022 - books.google.com
The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in
response to a query. Although the most common formulation of text ranking is search …
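
The book's central pattern is multi-stage ranking: a cheap first stage retrieves candidates, then a transformer cross-encoder such as monoBERT rescores each (query, text) pair jointly. A sketch with the model abstracted into a stand-in scoring function:

```python
def rerank(query, candidates, score_fn):
    """Second-stage reranking: score each (query, text) pair with
    `score_fn` (a stand-in for a cross-encoder here) and sort."""
    scored = [(score_fn(query, text), text) for text in candidates]
    return [text for _, text in sorted(scored, reverse=True)]

# Toy stand-in scorer: term overlap instead of a real BERT cross-encoder.
overlap = lambda q, t: len(set(q.split()) & set(t.split()))
print(rerank("neural text ranking",
             ["ranking with neural text models", "cooking pasta"],
             overlap))
```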

Open question answering over tables and text

W Chen, MW Chang, E Schlinger, W Wang… - arXiv preprint arXiv …, 2020 - arxiv.org
In open question answering (QA), the answer to a question is produced by retrieving and
then analyzing documents that might contain answers to the question. Most open QA …
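
One common way to make a single retriever cover both modalities, used in some form by this line of work, is to linearize table rows into text passages; the template below is illustrative, not the paper's:

```python
def linearize_row(table_title, header, row):
    """Flatten one table row into a text passage so tables and prose
    passages can share a single retriever index."""
    cells = " ; ".join(f"{h} is {v}" for h, v in zip(header, row))
    return f"{table_title} : {cells}"

print(linearize_row("Olympic hosts", ["Year", "City"], ["2008", "Beijing"]))
# Olympic hosts : Year is 2008 ; City is Beijing
```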

PARADE: Passage Representation Aggregation for Document Reranking

C Li, A Yates, S MacAvaney, B He, Y Sun - ACM Transactions on …, 2023 - dl.acm.org
Pre-trained transformer models, such as BERT and T5, have been shown to be highly effective at
ad hoc passage and document ranking. Due to the inherent sequence length limits of these …
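
PARADE's premise in miniature: split a long document into passages that fit the encoder's length limit, score each against the query, and pool. The sketch below uses max pooling; the paper's strongest variants instead aggregate passage representations with a small transformer, and the scorer here is a stand-in:

```python
def score_document(query, doc, score_passage, max_len=512, stride=256):
    """Slide a window over the document to get passages within the
    encoder's length limit, score each with `score_passage` (a stand-in
    for BERT here), then max-pool the passage scores."""
    tokens = doc.split()
    passages = [" ".join(tokens[i:i + max_len])
                for i in range(0, len(tokens), stride)]
    return max(score_passage(query, p) for p in passages)

# Toy scorer: term overlap; tiny window so the doc actually splits.
overlap = lambda q, p: len(set(q.split()) & set(p.split()))
doc = "transformers rank passages well but documents are long " * 3
print(score_document("rank long documents", doc, overlap, max_len=8, stride=4))
```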

Rethinking search: making domain experts out of dilettantes

D Metzler, Y Tay, D Bahri, M Najork - ACM SIGIR Forum, 2021 - dl.acm.org
When experiencing an information need, users want to engage with a domain expert, but
often turn to an information retrieval system, such as a search engine, instead. Classical …

LongRAG: Enhancing retrieval-augmented generation with long-context LLMs

Z Jiang, X Ma, W Chen - arXiv preprint arXiv:2406.15319, 2024 - arxiv.org
In the traditional RAG framework, the basic retrieval units are normally short. Common
retrievers like DPR normally work with 100-word Wikipedia paragraphs. Such a design …
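
LongRAG's change is to the retrieval unit itself: pack many short paragraphs into far longer units so a long-context LLM reads fewer, larger chunks. A greedy packing sketch (the target size is illustrative, not the paper's exact setting):

```python
def make_long_units(paragraphs, target_words=3000):
    """Greedily pack adjacent short paragraphs into long retrieval
    units, instead of indexing ~100-word paragraphs individually."""
    units, current, size = [], [], 0
    for p in paragraphs:
        n = len(p.split())
        if current and size + n > target_words:
            units.append(" ".join(current))
            current, size = [], 0
        current.append(p)
        size += n
    if current:
        units.append(" ".join(current))
    return units

paras = [f"paragraph {i} " * 100 for i in range(10)]  # ~200 words each
print([len(u.split()) for u in make_long_units(paras, target_words=500)])
```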

Fine-grained fact verification with kernel graph attention network

Z Liu, C Xiong, M Sun, Z Liu - arXiv preprint arXiv:1910.09796, 2019 - arxiv.org
Fact Verification requires fine-grained natural language inference capability that finds subtle
clues to identify claims that are syntactically and semantically correct but not well supported. This …
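
KGAT's kernel attention descends from K-NRM-style Gaussian kernel pooling: each kernel soft-counts how many similarity scores fall near its mean, turning a bag of matches into a fixed-size feature. A simplified sketch (kernel means and width are illustrative):

```python
import numpy as np

def kernel_features(sims, mus=(-0.5, 0.0, 0.5, 1.0), sigma=0.1):
    """Gaussian kernel pooling over a vector of similarity scores:
    kernel k soft-counts scores near mus[k], yielding one feature per
    kernel. KGAT applies this idea over graph attention scores."""
    sims = np.asarray(sims)[:, None]                    # (n, 1)
    mus = np.asarray(mus)[None, :]                      # (1, K)
    k = np.exp(-(sims - mus) ** 2 / (2 * sigma ** 2))   # (n, K) activations
    return np.log1p(k.sum(axis=0))                      # soft match counts

print(kernel_features([0.95, 0.9, 0.1, -0.4]))
```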