Large language models for information retrieval: A survey

Y Zhu, H Yuan, S Wang, J Liu, W Liu, C Deng… - arXiv preprint arXiv …, 2023 - arxiv.org
As a primary means of information acquisition, information retrieval (IR) systems, such as
search engines, have integrated themselves into our daily lives. These systems also serve …

Large language models are effective text rankers with pairwise ranking prompting

Z Qin, R Jagerman, K Hui, H Zhuang, J Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Ranking documents using Large Language Models (LLMs) by directly feeding the query and
candidate documents into the prompt is an interesting and practical problem. However …

Ranked List Truncation for Large Language Model-based Re-Ranking

C Meng, N Arabzadeh, A Askari, M Aliannejadi… - Proceedings of the 47th …, 2024 - dl.acm.org
We study ranked list truncation (RLT) from a novel retrieve-then-re-rank perspective, where
we optimize re-ranking by truncating the retrieved list (i.e., trimming re-ranking candidates). RLT is …

Make large language model a better ranker

WS Chao, Z Zheng, H Zhu, H Liu - arXiv preprint arXiv:2403.19181, 2024 - arxiv.org
Large Language Models (LLMs) demonstrate robust capabilities across various fields,
leading to a paradigm shift in LLM-enhanced Recommender System (RS). Research to date …

Query performance prediction using relevance judgments generated by large language models

C Meng, N Arabzadeh, A Askari, M Aliannejadi… - arXiv preprint arXiv …, 2024 - arxiv.org
Query performance prediction (QPP) aims to estimate the retrieval quality of a search system
for a query without human relevance judgments. Previous QPP methods typically return a …

DemoRank: Selecting effective demonstrations for large language models in ranking task

W Liu, Y Zhu, Z Dou - arXiv preprint arXiv:2406.16332, 2024 - arxiv.org
Recently, there has been increasing interest in applying large language models (LLMs) as
zero-shot passage rankers. However, few studies have explored how to select appropriate …

Curriculum Demonstration Selection for In-Context Learning

DA Vu, NTC Duy, X Wu, HM Nhat, D Mingzhe… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have shown strong in-context learning (ICL) abilities with a
few demonstrations. However, one critical challenge is how to select demonstrations to elicit …

Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers

S Chen, BJ Gutiérrez, Y Su - arXiv preprint arXiv:2410.02642, 2024 - arxiv.org
Information retrieval (IR) systems have played a vital role in modern digital life and have
cemented their continued usefulness in this new era of generative AI via retrieval …

Cross-Domain Integration for General Sensor Data Synthesis: Leveraging LLMs and Domain-Specific Generative Models in Collaborative Environments

X Zhou, Y Hu, Q Jia, R Xie - IEEE Sensors Journal, 2024 - ieeexplore.ieee.org
Synthetic data has emerged as a critical component in the fields of machine learning and
data science, providing a solution to overcome limitations associated with real-world data …

Recent advances in text embedding: A Comprehensive Review of Top-Performing Methods on the MTEB Benchmark

H Cao - arXiv preprint arXiv:2406.01607, 2024 - arxiv.org
Text embedding methods have become increasingly popular in both industrial and
academic fields due to their critical role in a variety of natural language processing tasks …