Creation of reliable relevance judgments in information retrieval systems evaluation experimentation through crowdsourcing: a review

P Samimi, SD Ravana - The Scientific World Journal, 2014 - Wiley Online Library
Test collections are used to evaluate information retrieval systems in laboratory‐based
evaluation experiments. In a classic setting, generating relevance judgments involves …

DepreSym: A Depression Symptom Annotated Corpus and the Role of LLMs as Assessors of Psychological Markers

A Pérez, M Fernández-Pichel, J Parapar… - arXiv preprint arXiv …, 2023 - arxiv.org
Computational methods for depression detection aim to mine traces of depression from
online publications posted by Internet users. However, solutions trained on existing …

Reliable Information Retrieval Systems Performance Evaluation: A Review

MH Joseph, SD Ravana - IEEE Access, 2024 - ieeexplore.ieee.org
With the progress and availability of various search tools, interest in the evaluation of
information retrieval based on user perspective has grown tremendously among …

Studying topical relevance with evidence-based crowdsourcing

O Inel, G Haralabopoulos, D Li, C Van Gysel… - Proceedings of the 27th …, 2018 - dl.acm.org
Information Retrieval systems rely on large test collections to measure their effectiveness in
retrieving relevant documents. While the demand is high, the task of creating such test …

Mastering web mining and information retrieval in the digital age

K Kasemsap - Web usage mining techniques and applications …, 2017 - igi-global.com
This chapter aims to master web mining and Information Retrieval (IR) in the digital age, thus
describing the overviews of web mining and web usage mining; the significance of web …

Question answering track evaluation in TREC, CLEF and NTCIR

MD Olvera-Lobo, J Gutiérrez-Artacho - New Contributions in Information …, 2015 - Springer
Question Answering (QA) systems are put forward as a real alternative to
Information Retrieval systems as they provide the user with a fast and comprehensible …

Intelligent topic selection for low-cost information retrieval evaluation: A New perspective on deep vs. shallow judging

M Kutlu, T Elsayed, M Lease - Information Processing & Management, 2018 - Elsevier
While test collections provide the cornerstone for Cranfield-based evaluation of information
retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling …

Correlation, prediction and ranking of evaluation metrics in information retrieval

S Gupta, M Kutlu, V Khetan, M Lease - … 14–18, 2019, Proceedings, Part I …, 2019 - Springer
Given limited time and space, IR studies often report few evaluation metrics which must be
carefully selected. To inform such selection, we first quantify correlation between 23 popular …

Improving the accuracy of the information retrieval evaluation process by considering unjudged document lists from the relevant judgment sets

MH Joseph, SD Ravana - Information Research an international …, 2024 - publicera.kb.se
To improve user satisfaction and loyalty to search engines, the performance
of the retrieval systems has to be better in terms of the number of relevant documents …

Validity and reliability of Researcher ID and of "Web of Science Production of Spanish Psychology"

JA Olivas-Ávila, B Musi-Lechuga - International Journal of Clinical and …, 2014 - Elsevier
The creation of systems that integrate research products, such as Thomson Reuters'
Researcher ID, has become an emerging need owing to how complex it is …