Fine-grained classification with noisy labels

Q Wei, L Feng, H Sun, R Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Learning with noisy labels (LNL) aims to ensure model generalization given a label-
corrupted training set. In this work, we investigate a rarely studied scenario of LNL on fine …

Mitigating memorization of noisy labels by clipping the model prediction

H Wei, H Zhuang, R Xie, L Feng… - International …, 2023 - proceedings.mlr.press
In the presence of noisy labels, designing robust loss functions is critical for securing the
generalization performance of deep neural networks. Cross Entropy (CE) loss has been …

Weak proxies are sufficient and preferable for fairness with missing sensitive attributes

Z Zhu, Y Yao, J Sun, H Li, Y Liu - … Conference on Machine …, 2023 - proceedings.mlr.press
Evaluating fairness can be challenging in practice because the sensitive attributes of data
are often inaccessible due to privacy constraints. The go-to approach that the industry …

To aggregate or not? Learning with separate noisy labels

J Wei, Z Zhu, T Luo, E Amid, A Kumar… - Proceedings of the 29th …, 2023 - dl.acm.org
Raw training data often comes with separate noisy labels collected from
multiple imperfect annotators (e.g., via crowdsourcing). A typical way of using these separate …

Unmasking and improving data credibility: A study with datasets for training harmless language models

Z Zhu, J Wang, H Cheng, Y Liu - arXiv preprint arXiv:2311.11202, 2023 - arxiv.org
Language models have shown promise in various tasks but can be affected by undesired
data during training, fine-tuning, or alignment. For example, if some unsafe conversations …

Understanding and mitigating the label noise in pre-training on downstream tasks

H Chen, J Wang, A Shah, R Tao, H Wei, X Xie… - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-training on large-scale datasets and then fine-tuning on downstream tasks have
become a standard practice in deep learning. However, pre-training data often contain label …

Learning with noisy foundation models

H Chen, J Wang, Z Wang, R Tao, H Wei, X Xie… - arXiv preprint arXiv …, 2024 - arxiv.org
Foundation models are usually pre-trained on large-scale datasets and then adapted to
downstream tasks through tuning. However, the large-scale pre-training datasets, often …

Sample Self-Selection Using Dual Teacher Networks for Pathological Image Classification with Noisy Labels

G Han, W Guo, H Zhang, J Jin, X Gan, X Zhao - Computers in Biology and …, 2024 - Elsevier
Deep neural networks (DNNs) enable advanced image processing but depend on large
quantities of high-quality labeled data. The presence of noisy data significantly degrades the …

FedFixer: Mitigating Heterogeneous Label Noise in Federated Learning

X Ji, Z Zhu, W Xi, O Gadyatskaya, Z Song… - Proceedings of the …, 2024 - ojs.aaai.org
Federated Learning (FL) heavily depends on label quality for its performance. However, the
label distribution among individual clients is often both noisy and heterogeneous. The …

Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection

S Kim, D Lee, SK Kang, S Chae… - Proceedings of the …, 2024 - openaccess.thecvf.com
Label noise commonly found in real-world datasets has a detrimental impact on a model's
generalization. To effectively detect incorrectly labeled instances, previous works have …