Quantifying the invisible labor in crowd work

C Toxtli, S Suri, S Savage - Proceedings of the ACM on human-computer …, 2021 - dl.acm.org
Crowdsourcing markets provide workers with a centralized place to find paid work. What
may not be obvious at first glance is that, in addition to the work they do for pay, crowd …

Asking Clarifying Questions: To benefit or to disturb users in Web search?

J Zou, A Sun, C Long, M Aliannejadi… - Information Processing & …, 2023 - Elsevier
Modern information-seeking systems are becoming more interactive, mainly through asking
Clarifying Questions (CQs) to refine users' information needs. System-generated CQs may …

Users meet clarifying questions: Toward a better understanding of user interactions for search clarification

J Zou, M Aliannejadi, E Kanoulas, MS Pera… - ACM Transactions on …, 2023 - dl.acm.org
The use of clarifying questions (CQs) is a fairly new and useful technique to aid systems in
recognizing the intent, context, and preferences behind user queries. Yet, understanding the …

Annotator rationales for labeling tasks in crowdsourcing

M Kutlu, T McDonnell, T Elsayed, M Lease - Journal of Artificial Intelligence …, 2020 - jair.org
When collecting item ratings from human judges, it can be difficult to measure and enforce
data quality due to task subjectivity and lack of transparency into how judges make each …

Inpars-light: Cost-effective unsupervised training of efficient rankers

L Boytsov, P Patel, V Sourabh, R Nisar… - arXiv preprint arXiv …, 2023 - arxiv.org
We carried out a reproducibility study of the InPars recipe for unsupervised training of neural
rankers. As a by-product of this study, we developed a simple-yet-effective modification of …

Crowdco-op: Sharing risks and rewards in crowdsourcing

S Fan, U Gadiraju, A Checco, G Demartini - Proceedings of the ACM on …, 2020 - dl.acm.org
Paid micro-task crowdsourcing has gained in popularity partly due to the increasing need for
large-scale manually labelled datasets which are often used to train and evaluate Artificial …

On the effect of relevance scales in crowdsourcing relevance assessments for Information Retrieval evaluation

K Roitero, E Maddalena, S Mizzaro… - Information Processing & …, 2021 - Elsevier
Relevance is a key concept in information retrieval and widely used for the evaluation of
search systems using test collections. We present a comprehensive study of the effect of the …

On the role of human and machine metadata in relevance judgment tasks

J Xu, L Han, S Sadiq, G Demartini - Information Processing & Management, 2023 - Elsevier
In order to evaluate the effectiveness of Information Retrieval (IR) systems, it is key to collect
relevance judgments from human assessors. Crowdsourcing has successfully been used as …

On the Impact of Showing Evidence from Peers in Crowdsourced Truthfulness Assessments

J Xu, L Han, S Sadiq, G Demartini - ACM Transactions on Information …, 2024 - dl.acm.org
Misinformation has been rapidly spreading online. The common approach to dealing with it
is deploying expert fact-checkers who follow forensic processes to identify the veracity of …

A systematic evaluation of transfer learning and pseudo-labeling with BERT-based ranking models

I Mokrii, L Boytsov, P Braslavski - … of the 44th International ACM SIGIR …, 2021 - dl.acm.org
Due to high annotation costs, making the best use of existing human-created training data is
an important research direction. We therefore carry out a systematic evaluation of …