Adopting a two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language Models (PLMs) have achieved substantial advancements in the field of natural language …
Classification models categorize objects into given classes, guided by training samples with input features and labels. In practice, however, labels can be corrupted by human error or …
Commonsense reasoning based on knowledge graphs (KGs) is a challenging task that requires answering complex questions over the described textual contexts and relevant …
The creation of large-scale datasets annotated by humans inevitably introduces noisy labels, leading to reduced generalization in deep-learning models. Sample selection-based …
With the emergence of foundation models, deep learning-based object detectors have shown practical usability in closed-set scenarios. However, for real-world tasks, object …
D Kim, K Ryoo, H Cho, S Kim - International Journal of Computer Vision, 2024 - Springer
Annotating the dataset with high-quality labels is crucial for deep networks' performance, but in real-world scenarios, the labels are often contaminated by noise. To address this, some …
Q Zhang, G Jin, Y Zhu, H Wei, Q Chen - Entropy, 2024 - mdpi.com
While collecting training data, even with manual verification by experts from crowdsourcing platforms, eliminating incorrect annotations (noisy labels) completely is …
Learning with noisy labels has been studied to address incorrect label annotations in real- world applications. In this paper, we present ChiMera, a two-stage learning-from-noisy …
Conditional diffusion models have shown remarkable performance in various generative tasks, but training them requires large-scale datasets that often contain noise in conditional …