Estimating noise transition matrix with label correlations for noisy multi-label learning

S Li, X Xia, H Zhang, Y Zhan… - Advances in Neural …, 2022 - proceedings.neurips.cc
In label-noise learning, the noise transition matrix, bridging the class posterior for noisy and
clean data, has been widely exploited to learn statistically consistent classifiers. The …
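The transition-matrix idea can be made concrete with the standard single-label forward correction: the noisy posterior is modeled as p(ỹ|x) = Tᵀ p(y|x). Below is a minimal sketch assuming a known T with T[i, j] = p(ỹ = j | y = i); it shows the generic correction, not this paper's correlation-aware multi-label estimator.

```python
import numpy as np

def forward_corrected_nll(clean_posterior, noisy_labels, T):
    """Forward correction with a noise transition matrix T, where
    T[i, j] = p(noisy = j | clean = i). The noisy posterior is
    p(noisy | x) = p(clean | x) @ T, and minimizing NLL against it
    yields a statistically consistent classifier when T is correct."""
    noisy_posterior = clean_posterior @ T            # (n, c) @ (c, c) -> (n, c)
    picked = noisy_posterior[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + 1e-12))
```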

Combating noisy labels with sample selection by mining high-discrepancy examples

X Xia, B Han, Y Zhan, J Yu, M Gong… - Proceedings of the …, 2023 - openaccess.thecvf.com
The sample selection approach is popular in learning with noisy labels. The state-of-the-art
methods train two deep networks simultaneously for sample selection, which aims to employ …
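As a rough sketch of the two-network idea, the function below scores each example by the symmetric KL divergence between the peers' predictions and keeps the most discrepant ones; this particular scoring rule is an assumption for illustration, not necessarily the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def select_high_discrepancy(logits_a, logits_b, keep_ratio=0.5):
    """Score each example by the symmetric KL divergence between the
    two networks' softmax predictions and keep the top keep_ratio
    fraction with the highest discrepancy."""
    pa, pb = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    log_pa, log_pb = pa.clamp_min(1e-12).log(), pb.clamp_min(1e-12).log()
    disc = (pa * (log_pa - log_pb)).sum(1) + (pb * (log_pb - log_pa)).sum(1)
    k = max(1, int(keep_ratio * len(disc)))
    return disc.topk(k).indices      # indices of the selected examples
```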

Holistic label correction for noisy multi-label classification

X Xia, J Deng, W Bao, Y Du, B Han… - Proceedings of the …, 2023 - openaccess.thecvf.com
Multi-label classification aims to learn classification models from instances associated with
multiple labels. It is pivotal to learn and utilize the label dependence among multiple labels …

FlatMatch: Bridging labeled data and unlabeled data with cross-sharpness for semi-supervised learning

Z Huang, L Shen, J Yu, B Han… - Advances in Neural …, 2023 - proceedings.neurips.cc
Semi-Supervised Learning (SSL) has been an effective way to leverage abundant
unlabeled data with extremely scarce labeled data. However, most SSL methods are …

WARM: On the benefits of weight averaged reward models

A Ramé, N Vieillard, L Hussenot, R Dadashi… - arXiv preprint arXiv …, 2024 - arxiv.org
Aligning large language models (LLMs) with human preferences through reinforcement
learning from human feedback (RLHF) can lead to reward hacking, where LLMs exploit failures in the reward …
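The mechanism named in the title is straightforward to sketch: take several reward models fine-tuned from a shared pretrained initialization (so their weights can be meaningfully averaged) and form the element-wise mean of their parameters. A minimal sketch, assuming identical architectures and state-dict keys:

```python
import torch

def average_reward_models(state_dicts):
    """Weight averaging: element-wise mean of the parameters of several
    reward models fine-tuned from a shared initialization. Assumes all
    state dicts have identical keys and tensor shapes."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }
```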

CS-Isolate: Extracting hard confident examples by content and style isolation

Y Lin, Y Yao, X Shi, M Gong, X Shen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Label noise widely exists in large-scale image datasets. To mitigate the side effects of label
noise, state-of-the-art methods focus on selecting confident examples by leveraging semi …

IDEAL: Influence-driven selective annotations empower in-context learners in large language models

S Zhang, X Xia, Z Wang, LH Chen, J Liu, Q Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
In-context learning is a promising paradigm that utilizes in-context examples as prompts for
the predictions of large language models. These prompts are crucial for achieving strong …

Mitigating memorization of noisy labels by clipping the model prediction

H Wei, H Zhuang, R Xie, L Feng… - International …, 2023 - proceedings.mlr.press
In the presence of noisy labels, designing robust loss functions is critical for securing the
generalization performance of deep neural networks. Cross Entropy (CE) loss has been …
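One way to realize "clipping the model prediction" is to bound the L2 norm of each logit vector before applying the usual cross entropy, which caps how large the loss on a (possibly mislabeled) example can get. A minimal sketch; the threshold tau and the exact clipping rule here are assumptions:

```python
import torch
import torch.nn.functional as F

def logit_clip_ce(logits, labels, tau=1.0):
    """Rescale each logit vector so its L2 norm is at most tau, then
    apply standard cross entropy. Bounding the logits bounds the
    per-example loss, limiting memorization of noisy labels."""
    norms = logits.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
    clipped = logits * (tau / norms).clamp(max=1.0)
    return F.cross_entropy(clipped, labels)
```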

BiCro: Noisy correspondence rectification for multi-modality data via bi-directional cross-modal similarity consistency

S Yang, Z Xu, K Wang, Y You, H Yao… - Proceedings of the …, 2023 - openaccess.thecvf.com
As one of the most fundamental techniques in multimodal learning, cross-modal matching
aims to project various sensory modalities into a shared feature space. To achieve this …
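The shared-space matching the snippet describes reduces, in its simplest form, to a cosine-similarity matrix between the two modalities' embeddings, with matched pairs on the diagonal. A minimal sketch, assuming pre-computed image and text embeddings; it shows the matching primitive, not BiCro's rectification procedure:

```python
import torch
import torch.nn.functional as F

def cross_modal_similarity(img_emb, txt_emb):
    """Cosine-similarity matrix between L2-normalized image and text
    embeddings in a shared space; entry (i, j) scores how well image i
    matches caption j, so correct pairs should dominate the diagonal."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    return img @ txt.T
```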

OT-Filter: An optimal transport filter for learning with noisy labels

C Feng, Y Ren, X Xie - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
The success of deep learning is largely attributed to training on clean data. However,
data is often coupled with noisy labels in practice. Learning with noisy labels is challenging …
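To make "optimal transport filter" concrete, one illustrative formulation (an assumption, not necessarily the paper's) transports samples onto class centroids under a balanced class marginal via Sinkhorn iterations and keeps the samples whose assignment agrees with the given label:

```python
import numpy as np

def sinkhorn(cost, r, c, reg=0.05, n_iters=200):
    """Entropy-regularized OT plan between marginals r (samples) and c (classes)."""
    K = np.exp(-cost / reg)
    u = np.ones_like(r)
    for _ in range(n_iters):
        v = c / (K.T @ u + 1e-12)
        u = r / (K @ v + 1e-12)
    return u[:, None] * K * v[None, :]

def ot_label_filter(features, centroids, given_labels):
    """Keep samples whose OT assignment to class centroids matches their
    (possibly noisy) label. The cost, marginals, and use of centroids are
    illustrative assumptions, not the paper's exact formulation."""
    cost = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.max()                        # scale to avoid exp underflow
    n, c = cost.shape
    plan = sinkhorn(cost, np.full(n, 1.0 / n), np.full(c, 1.0 / c))
    assigned = plan.argmax(1)                       # hard assignment per sample
    return assigned == given_labels                 # boolean mask of kept samples
```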