Neural document ranking models perform impressively well due to superior language understanding gained from pre-training tasks. However, due to their complexity and large …
J Singh, A Anand - Proceedings of the 2020 Conference on Fairness …, 2020 - dl.acm.org
A key problem in information retrieval is understanding the latent intention of a user's under-specified query. Retrieval models that are able to correctly uncover the query intent often …
Ranking documents using Large Language Models (LLMs) by directly feeding the query and candidate documents into the prompt is an interesting and practical problem. However …
Large language models are often ranked according to their level of alignment with human preferences: a model is better than other models if its outputs are more frequently preferred …
Large language models (LLMs) exhibit positional bias in how they use context, which especially complicates listwise ranking. To address this, we propose permutation self …
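The snippet above names permutation self-consistency as a remedy for positional bias in listwise ranking. A minimal sketch of the general idea, shuffling the candidate order across prompts and aggregating the resulting rankings, is shown below. This is an illustration under stated assumptions, not the paper's method: `rank_with_llm` is a hypothetical placeholder for a real listwise LLM call, and a simple Borda count stands in for the heavier rank-aggregation schemes used in the literature.

```python
import random
from collections import defaultdict

def rank_with_llm(query, docs):
    """Hypothetical placeholder for a listwise LLM ranking call.

    A real implementation would build a prompt from `query` and
    `docs` and parse the permutation the model emits. Here a toy
    stand-in sorts by keyword overlap with the query.
    """
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))

def permutation_self_consistent_rank(query, docs, n_perms=8, seed=0):
    """Aggregate rankings over shuffled input orders (Borda count).

    Shuffling the candidate order across prompts and averaging the
    resulting ranks marginalizes out any position-dependent
    preference the ranker might have.
    """
    rng = random.Random(seed)
    scores = defaultdict(float)
    for _ in range(n_perms):
        shuffled = docs[:]
        rng.shuffle(shuffled)  # break ties between content and position
        ranking = rank_with_llm(query, shuffled)
        for pos, doc in enumerate(ranking):
            scores[doc] += len(docs) - pos  # higher = better (Borda points)
    return sorted(docs, key=lambda d: -scores[d])

docs = ["llm ranking survey", "cooking pasta", "ranking with llms"]
print(permutation_self_consistent_rank("llm ranking", docs))
```

With a deterministic stand-in ranker the aggregation is a no-op; the averaging only matters when the underlying ranker's output actually varies with input order, as the snippet reports for LLMs.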
GS Pîrtoacă, T Rebedea, S Ruseti - arXiv preprint arXiv:1909.00596, 2019 - arxiv.org
Answering multiple-choice questions in a setting in which no supporting documents are explicitly provided continues to stand as a core problem in natural language processing. The …
L Zhang, Y Zhang, D Long, P Xie… - Findings of the …, 2024 - aclanthology.org
Text ranking is a critical task in information retrieval. Recent advances in pre-trained language models (PLMs), especially large language models (LLMs), present new …
B Nouriinanloo, M Lamothe - arXiv preprint arXiv:2406.18740, 2024 - arxiv.org
Large Language Models (LLMs) have been revolutionizing a myriad of natural language processing tasks with their diverse zero-shot capabilities. Indeed, existing work has shown …
Comparative reasoning plays a crucial role in text preference prediction; however, large language models (LLMs) often demonstrate inconsistencies in their reasoning. While …