A Braylan, M Marabella, O Alonso, M Lease - Journal of Artificial Intelligence …, 2023 - jair.org
Human annotations are vital to supervised learning, yet annotators often disagree on the correct label, especially as annotation tasks increase in complexity. A common strategy to …
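The truncated sentence most likely refers to aggregating the conflicting labels into a single answer per item. As a minimal illustration (not the paper's specific method; the function and data below are hypothetical), the simplest such aggregator is plain majority voting:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-item annotations by majority vote.

    annotations: dict mapping item id -> list of labels from annotators.
    Returns: dict mapping item id -> most frequent label (ties broken
    by Counter's internal ordering).
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Example: three annotators disagree on item "b".
labels = {"a": ["cat", "cat", "cat"], "b": ["cat", "dog", "dog"]}
print(majority_vote(labels))  # {'a': 'cat', 'b': 'dog'}
```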
Crowdsourcing plays a vital role in today's AI industry. However, existing crowdsourcing research focuses mainly on simple tasks that are often formulated as label …
When annotators label data, a key metric for quality assurance is inter-annotator agreement (IAA): the extent to which annotators agree on their labels. Though many IAA measures exist …
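As a concrete example of an IAA measure, the sketch below computes Cohen's kappa for two annotators, which corrects raw observed agreement for the agreement expected by chance from each annotator's label marginals. The function and data are illustrative, not taken from the paper:

```python
def cohens_kappa(y1, y2):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the label marginals."""
    n = len(y1)
    labels = set(y1) | set(y2)
    p_o = sum(a == b for a, b in zip(y1, y2)) / n
    p_e = sum((y1.count(l) / n) * (y2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

ann1 = ["yes", "yes", "no", "yes", "no"]
ann2 = ["yes", "no", "no", "yes", "no"]
# p_o = 0.8, p_e = 0.48, so kappa = 0.32 / 0.52 ~= 0.615
print(round(cohens_kappa(ann1, ann2), 3))
```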
J Li - ICASSP 2024-2024 IEEE International Conference on …, 2024 - ieeexplore.ieee.org
Whether Large Language Models (LLMs) can outperform crowdsourcing on data annotation tasks has recently attracted interest. Some works have examined this question with the average …
J Li - Proceedings of the 43rd International ACM SIGIR …, 2020 - dl.acm.org
The crowd is cheaper and easier to access than an oracle when collecting ground-truth data for training and evaluating models. To ensure the quality of the crowdsourced data, one can …
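One quality-control device the truncated sentence likely alludes to is golden tasks: items with known answers mixed into a crowdsourcing job so that each worker's accuracy can be estimated. A minimal sketch, assuming a simple tuple-based response format (all names here are hypothetical):

```python
def filter_by_golden_tasks(responses, gold, min_accuracy=0.8):
    """Keep responses only from workers whose accuracy on golden
    (known-answer) tasks meets a threshold.

    responses: list of (worker, task, label) tuples.
    gold: dict mapping golden task id -> correct label.
    """
    hits, totals = {}, {}
    for worker, task, label in responses:
        if task in gold:
            totals[worker] = totals.get(worker, 0) + 1
            hits[worker] = hits.get(worker, 0) + (label == gold[task])
    trusted = {w for w in totals if hits[w] / totals[w] >= min_accuracy}
    return [r for r in responses if r[0] in trusted]
```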
In this paper, we explore different ways of training a model for handwritten text recognition when multiple imperfect or noisy transcriptions are available. We consider various training …
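The snippet does not say which training strategies the paper compares, so the following is only one plausible example of learning from several noisy transcriptions: compute the CTC loss against every candidate transcription of a line image and keep the minimum, so the gradient follows the reference the model currently finds most plausible. A hedged PyTorch sketch; the function name and data layout are assumptions:

```python
import torch
import torch.nn.functional as F

def multi_transcription_loss(log_probs, transcriptions, input_lengths, blank=0):
    """One hypothetical strategy for multiple noisy transcriptions:
    take the minimum CTC loss over all candidate transcriptions.

    log_probs: (T, 1, C) log-softmax model outputs for one line image.
    transcriptions: list of 1-D LongTensors of candidate label sequences.
    input_lengths: LongTensor of shape (1,), i.e. torch.tensor([T]).
    """
    losses = []
    for target in transcriptions:
        target_lengths = torch.tensor([len(target)])
        losses.append(F.ctc_loss(log_probs, target.unsqueeze(0),
                                 input_lengths, target_lengths, blank=blank))
    return torch.stack(losses).min()
```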
D Ustalov, N Pavlichenko, B Tseitlin - arXiv preprint arXiv:2109.08584, 2021 - arxiv.org
Quality control is a crux of crowdsourcing. While most means for quality control are organizational and rely on worker selection, golden tasks, and post-acceptance …
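This entry describes Crowd-Kit, an open-source Python library of answer-aggregation methods. A small usage sketch, assuming the crowd-kit package's documented interface, in which annotations are a pandas DataFrame with task, worker, and label columns:

```python
import pandas as pd
from crowdkit.aggregation import DawidSkene, MajorityVote

# Each row is one worker's answer to one task.
df = pd.DataFrame({
    "task":   ["t1", "t1", "t1", "t2", "t2", "t2"],
    "worker": ["w1", "w2", "w3", "w1", "w2", "w3"],
    "label":  ["cat", "cat", "dog", "dog", "dog", "dog"],
})

print(MajorityVote().fit_predict(df))        # per-task majority label
print(DawidSkene(n_iter=10).fit_predict(df))  # EM-based aggregation
```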
A Drutsa, D Ustalov, V Fedorova… - Proceedings of the …, 2021 - aclanthology.org
In this tutorial, we present unique industry experience in efficient natural language data annotation via crowdsourcing, shared by both leading researchers and …