Large language models for information retrieval: A survey

Y Zhu, H Yuan, S Wang, J Liu, W Liu, C Deng… - arXiv preprint arXiv …, 2023 - arxiv.org
As a primary means of information acquisition, information retrieval (IR) systems, such as
search engines, have integrated themselves into our daily lives. These systems also serve …

Large language models are effective text rankers with pairwise ranking prompting

Z Qin, R Jagerman, K Hui, H Zhuang, J Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Ranking documents using Large Language Models (LLMs) by directly feeding the query and
candidate documents into the prompt is an interesting and practical problem. However …
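The pairwise approach named in the title can be sketched as follows: prompt the LLM with the query and two candidate documents, ask which is more relevant, and aggregate pairwise preferences into a ranking. This is a minimal all-pairs sketch with a toy stand-in for the LLM call (`mock_prefers` is hypothetical, not the paper's actual prompt or aggregation):

```python
from itertools import permutations

def prp_rank(query, docs, llm_prefers):
    """All-pairs pairwise-ranking sketch.

    llm_prefers(query, a, b) -> True if the model judges doc `a`
    more relevant to `query` than doc `b` (stand-in for an LLM call).
    Each document is scored by its number of pairwise wins.
    """
    wins = {i: 0 for i in range(len(docs))}
    for i, j in permutations(range(len(docs)), 2):
        if llm_prefers(query, docs[i], docs[j]):
            wins[i] += 1
    # Sort documents by descending win count (stable for ties)
    order = sorted(range(len(docs)), key=lambda i: -wins[i])
    return [docs[i] for i in order]

# Toy stand-in "LLM": prefers the doc with more query-term overlap.
def mock_prefers(query, a, b):
    score = lambda d: sum(w in d.lower() for w in query.lower().split())
    return score(a) > score(b)

docs = ["cats are pets", "dogs bark", "cats chase mice at night"]
print(prp_rank("cats mice", docs, mock_prefers))
# -> ['cats chase mice at night', 'cats are pets', 'dogs bark']
```

Running all O(n²) comparisons is the simplest aggregation; sorting-based variants that need fewer LLM calls are also possible.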

Dense text retrieval based on pretrained language models: A survey

WX Zhao, J Liu, R Ren, JR Wen - ACM Transactions on Information …, 2024 - dl.acm.org
Text retrieval is a long-standing research topic in information seeking, where a system is
required to return relevant information resources in response to users' queries in natural language. From …

RankVicuna: Zero-shot listwise document reranking with open-source large language models

R Pradeep, S Sharifymoghaddam, J Lin - arXiv preprint arXiv:2309.15088, 2023 - arxiv.org
Researchers have successfully applied large language models (LLMs) such as ChatGPT to
reranking in an information retrieval context, but to date, such work has mostly been built on …

Inpars-v2: Large language models as efficient dataset generators for information retrieval

V Jeronymo, L Bonifacio, H Abonizio, M Fadaee… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, InPars introduced a method to efficiently use large language models (LLMs) in
information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant …
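The few-shot setup described in the snippet can be sketched as prompt assembly: show the model a few (document, query) pairs, then append a new document and let it complete the query, yielding synthetic training pairs for a retriever. The example pairs and the `build_prompt` helper below are illustrative assumptions, not InPars's actual template:

```python
# Hypothetical few-shot examples of (document, relevant query) pairs.
FEW_SHOT = [
    ("The Eiffel Tower is located in Paris.", "where is the eiffel tower"),
    ("Water boils at 100 degrees Celsius.", "boiling point of water"),
]

def build_prompt(document, examples=FEW_SHOT):
    """Assemble a few-shot document->query prompt; an LLM completing
    it after 'Relevant query:' produces a synthetic training query."""
    parts = []
    for doc, query in examples:
        parts.append(f"Document: {doc}\nRelevant query: {query}\n")
    parts.append(f"Document: {document}\nRelevant query:")
    return "\n".join(parts)

prompt = build_prompt("Mount Everest is the tallest mountain on Earth.")
print(prompt)
```

The generated (query, document) pairs would then be filtered (e.g., by a relevance score) and used to train a ranker.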

How does generative retrieval scale to millions of passages?

R Pradeep, K Hui, J Gupta, AD Lelkes… - arXiv preprint arXiv …, 2023 - arxiv.org
Popularized by the Differentiable Search Index, the emerging paradigm of generative
retrieval re-frames the classic information retrieval problem into a sequence-to-sequence …

Fine-tuning LLaMA for multi-stage text retrieval

X Ma, L Wang, N Yang, F Wei, J Lin - Proceedings of the 47th …, 2024 - dl.acm.org
While large language models (LLMs) have shown impressive NLP capabilities, existing IR
applications mainly focus on prompting LLMs to generate query expansions or generating …

Beyond yes and no: Improving zero-shot LLM rankers via scoring fine-grained relevance labels

H Zhuang, Z Qin, K Hui, J Wu, L Yan, X Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Zero-shot text rankers powered by recent LLMs achieve remarkable ranking performance by
simply prompting. Existing prompts for pointwise LLM rankers mostly ask the model to …
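The idea in the title, replacing a binary yes/no judgment with fine-grained relevance labels, can be sketched as scoring by expected relevance over the label distribution. The label set, weights, and probabilities below are illustrative assumptions (a real ranker would read label-token likelihoods from the LLM):

```python
# Hypothetical fine-grained label set with integer relevance weights.
LABELS = {"Highly Relevant": 2, "Somewhat Relevant": 1, "Not Relevant": 0}

def expected_relevance(label_probs):
    """Score = sum over labels of weight * model probability of that
    label, giving a graded score instead of a hard yes/no."""
    return sum(LABELS[lab] * p for lab, p in label_probs.items())

# Two documents: both lean "relevant", but the second puts more mass
# on "Highly Relevant", which a coarse yes/no prompt could not express.
doc_a = {"Highly Relevant": 0.1, "Somewhat Relevant": 0.7, "Not Relevant": 0.2}
doc_b = {"Highly Relevant": 0.6, "Somewhat Relevant": 0.3, "Not Relevant": 0.1}
print(expected_relevance(doc_a), expected_relevance(doc_b))
# -> 0.9 1.5
```

Documents are then ranked by this expected-relevance score.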

RankZephyr: Effective and Robust Zero-Shot Listwise Reranking is a Breeze!

R Pradeep, S Sharifymoghaddam, J Lin - arXiv preprint arXiv:2312.02724, 2023 - arxiv.org
In information retrieval, proprietary large language models (LLMs) such as GPT-4 and
open-source counterparts such as LLaMA and Vicuna have played a vital role in reranking …

Found in the middle: Permutation self-consistency improves listwise ranking in large language models

R Tang, X Zhang, X Ma, J Lin, F Ture - arXiv preprint arXiv:2310.07712, 2023 - arxiv.org
Large language models (LLMs) exhibit positional bias in how they use context, which
especially complicates listwise ranking. To address this, we propose permutation self …
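The mechanics of permutation self-consistency can be sketched as: run the (position-biased) listwise ranker on several shuffled orderings of the candidates, then aggregate by mean rank position, so that bias tied to input order averages out. The biased toy ranker and mean-rank aggregation below are stand-ins, not the paper's exact method:

```python
import random

def psc_rank(items, rank_fn, n_shuffles=8, seed=0):
    """Permutation self-consistency sketch: rank several shuffled
    orderings, sum each item's rank positions, and sort by the total
    (lower = judged more relevant overall)."""
    rng = random.Random(seed)
    totals = {item: 0.0 for item in items}
    for _ in range(n_shuffles):
        shuffled = items[:]
        rng.shuffle(shuffled)
        for pos, item in enumerate(rank_fn(shuffled)):
            totals[item] += pos
    return sorted(items, key=lambda item: totals[item])

# Toy stand-in listwise "LLM" ranker with positional bias: docs
# mentioning "fox" go first, but ties keep their input order.
def biased_ranker(docs):
    return sorted(docs, key=lambda d: "fox" not in d)

items = ["green bird", "red fox", "blue fox"]
print(psc_rank(items, biased_ranker))
```

Across shuffles, the two tied "fox" documents swap input positions, so neither is systematically favored in the aggregate, while the clearly irrelevant document stays last.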