Unsupervised contrast-consistent ranking with language models

N Stoehr, P Cheng, J Wang, D Preotiuc-Pietro… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models contain ranking-based knowledge and are powerful solvers of in-context
ranking tasks. For instance, they may have parametric knowledge about the ordering of …
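As a concrete picture of the "in-context ranking tasks" the snippet mentions, a minimal listwise prompt might look as follows. The prompt wording, the llm() stub, and the index-parsing step are all illustrative assumptions; this is not the paper's unsupervised probing method.

```python
# Minimal illustration of an in-context ranking task (hedged: the prompt
# format and the llm() call are assumptions, not the paper's method).
def llm(prompt: str) -> str:
    """Placeholder for any LLM completion call."""
    raise NotImplementedError

def in_context_rank(items: list[str]) -> list[str]:
    # Present the items with bracketed indices the model can refer back to.
    listing = "\n".join(f"[{i}] {item}" for i, item in enumerate(items))
    prompt = (
        "Order the following items from largest to smallest:\n"
        f"{listing}\n"
        "Answer with the bracketed indices in order."
    )
    # Parse an answer like "[2] [0] [1]" back into a reordering.
    indices = [int(tok.strip("[]")) for tok in llm(prompt).split()]
    return [items[i] for i in indices]
```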

Extractive explanations for interpretable text ranking

J Leonhardt, K Rudra, A Anand - ACM Transactions on Information …, 2023 - dl.acm.org
Neural document ranking models perform impressively well due to superior language
understanding gained from pre-training tasks. However, due to their complexity and large …

Model agnostic interpretability of rankers via intent modelling

J Singh, A Anand - Proceedings of the 2020 Conference on Fairness …, 2020 - dl.acm.org
A key problem in information retrieval is understanding the latent intention of a user's under-specified query. Retrieval models that are able to correctly uncover the query intent often …

Large language models are effective text rankers with pairwise ranking prompting

Z Qin, R Jagerman, K Hui, H Zhuang, J Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Ranking documents using Large Language Models (LLMs) by directly feeding the query and
candidate documents into the prompt is an interesting and practical problem. However …
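A minimal sketch of pairwise ranking prompting as the snippet frames it: the model sees the query with two candidates at a time and states a preference, and those preferences drive an ordinary comparison sort. The llm() stub, the prompt wording, and the use of a full comparison sort (rather than the paper's more efficient variants) are assumptions.

```python
# Sketch of pairwise ranking prompting (hedged: prompt wording, the llm()
# call, and the sorting strategy are illustrative assumptions).
from functools import cmp_to_key

def llm(prompt: str) -> str:
    """Placeholder for any LLM completion call; expected to return 'A' or 'B'."""
    raise NotImplementedError

def compare(query: str, doc_a: str, doc_b: str) -> int:
    """Ask the model which passage better answers the query."""
    prompt = (
        f"Query: {query}\n"
        f"Passage A: {doc_a}\n"
        f"Passage B: {doc_b}\n"
        "Which passage is more relevant to the query? Answer A or B."
    )
    # Negative means doc_a sorts earlier (i.e., ranks higher).
    return -1 if llm(prompt).strip().upper().startswith("A") else 1

def pairwise_rank(query: str, docs: list[str]) -> list[str]:
    """Sort candidates using the LLM as the pairwise comparator."""
    return sorted(docs, key=cmp_to_key(lambda a, b: compare(query, a, b)))
```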

Prediction-Powered Ranking of Large Language Models

I Chatzi, E Straitouri, S Thejaswi… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models are often ranked according to their level of alignment with human preferences: a model is better than another model if its outputs are more frequently preferred …
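The underlying idea, prediction-powered inference, can be sketched for a single win rate: a strong judge model labels many pairwise comparisons, and a small human-labeled subset corrects the judge's bias. The function below is a simplified point estimate only; the paper's contribution includes rank sets with statistical guarantees, which this omits. Variable names and the data layout are assumptions.

```python
# Hedged sketch of a prediction-powered win-rate estimate (after
# prediction-powered inference): judge labels on a large unlabeled set,
# rectified by the human-judge gap on a small labeled set.
def pp_win_rate(judge_big: list[int],
                judge_small: list[int],
                human_small: list[int]) -> float:
    """judge_big: judge's 0/1 wins on N unlabeled comparisons;
    judge_small / human_small: judge's and humans' 0/1 wins on the
    same n human-annotated comparisons."""
    mean = lambda xs: sum(xs) / len(xs)
    # Judge estimate on the large set, plus a bias-correction term
    # measured where human labels are available.
    return mean(judge_big) + (mean(human_small) - mean(judge_small))
```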

Found in the middle: Permutation self-consistency improves listwise ranking in large language models

R Tang, X Zhang, X Ma, J Lin, F Ture - arXiv preprint arXiv:2310.07712, 2023 - arxiv.org
Large language models (LLMs) exhibit positional bias in how they use context, which
especially complicates listwise ranking. To address this, we propose permutation self …
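A hedged sketch of the shuffle-and-aggregate idea: rank the same candidates under several random input orders, then combine the results so positional bias averages out. rank_once() stands in for any listwise LLM ranker, and average-rank aggregation is a simplification of the paper's central-ranking aggregation.

```python
# Sketch of permutation self-consistency (hedged: rank_once() is a stub for
# any listwise LLM ranker; average-rank aggregation is a simplification).
import random
from collections import defaultdict

def rank_once(query: str, docs: list[str]) -> list[str]:
    """Placeholder: one listwise LLM ranking of docs for the query."""
    raise NotImplementedError

def permutation_self_consistency(query: str, docs: list[str],
                                 n_perms: int = 5, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    rank_sums = defaultdict(float)
    for _ in range(n_perms):
        shuffled = docs[:]
        rng.shuffle(shuffled)  # randomize input order to cancel positional bias
        for pos, doc in enumerate(rank_once(query, shuffled)):
            rank_sums[doc] += pos  # accumulate each document's rank position
    # Lower average rank = more consistently preferred across permutations.
    return sorted(docs, key=lambda d: rank_sums[d] / n_perms)
```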

Answering questions by learning to rank--Learning to rank by answering questions

GS Pîrtoacă, T Rebedea, S Ruseti - arXiv preprint arXiv:1909.00596, 2019 - arxiv.org
Answering multiple-choice questions in a setting in which no supporting documents are
explicitly provided continues to stand as a core problem in natural language processing. The …

A two-stage adaptation of large language models for text ranking

L Zhang, Y Zhang, D Long, P Xie… - Findings of the …, 2024 - aclanthology.org
Text ranking is a critical task in information retrieval. Recent advances in pre-trained
language models (PLMs), especially large language models (LLMs), present new …

Re-Ranking Step by Step: Investigating Pre-Filtering for Re-Ranking with Large Language Models

B Nouriinanloo, M Lamothe - arXiv preprint arXiv:2406.18740, 2024 - arxiv.org
Large Language Models (LLMs) have been revolutionizing a myriad of natural language
processing tasks with their diverse zero-shot capabilities. Indeed, existing work has shown …
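One plausible reading of pre-filtering as a code sketch: a lightweight model answers a cheap yes/no relevance question per candidate, so the expensive re-ranker only sees the survivors. The small_llm() call and the prompt are assumptions, not the paper's exact procedure.

```python
# Hedged sketch of pre-filtering before LLM re-ranking: a cheap relevance
# check prunes candidates ahead of the costly re-ranker. Both the model
# call and the prompt are illustrative assumptions.
def small_llm(prompt: str) -> str:
    """Placeholder for a lightweight yes/no relevance judge."""
    raise NotImplementedError

def prefilter(query: str, docs: list[str]) -> list[str]:
    kept = []
    for doc in docs:
        answer = small_llm(
            f"Query: {query}\nPassage: {doc}\n"
            "Could this passage be relevant to the query? Answer yes or no."
        )
        if answer.strip().lower().startswith("y"):
            kept.append(doc)
    return kept or docs  # fall back to the full list if everything is pruned
```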

On what basis? predicting text preference via structured comparative reasoning

JN Yan, T Liu, JT Chiu, J Shen, Z Qin, Y Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Comparative reasoning plays a crucial role in text preference prediction; however, large
language models (LLMs) often demonstrate inconsistencies in their reasoning. While …
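A rough sketch of structured comparison: instead of one holistic judgment, the model compares the two texts aspect by aspect and the per-aspect verdicts are aggregated. The aspect list, prompts, and llm() stub are illustrative assumptions rather than the paper's exact framework.

```python
# Hedged sketch of structured comparative reasoning for text preference:
# per-aspect pairwise verdicts, aggregated by majority vote. The aspects,
# prompt wording, and llm() call are illustrative assumptions.
def llm(prompt: str) -> str:
    """Placeholder for any LLM completion call; expected to return 'A' or 'B'."""
    raise NotImplementedError

ASPECTS = ["accuracy", "completeness", "clarity"]  # assumed aspect set

def structured_preference(query: str, text_a: str, text_b: str) -> str:
    votes = {"A": 0, "B": 0}
    for aspect in ASPECTS:
        verdict = llm(
            f"Query: {query}\nResponse A: {text_a}\nResponse B: {text_b}\n"
            f"Considering only {aspect}, which response is better? Answer A or B."
        ).strip().upper()[:1]
        if verdict in votes:  # ignore unparseable answers
            votes[verdict] += 1
    return "A" if votes["A"] >= votes["B"] else "B"
```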