Sparsity-guided holistic explanation for LLMs with interpretable inference-time intervention

Z Tan, T Chen, Z Zhang, H Liu - … of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Abstract Large Language Models (LLMs) have achieved unprecedented breakthroughs in
various natural language processing domains. However, the enigmatic "black-box" nature of …

Noise-robust fine-tuning of pretrained language models via external guidance

S Wang, Z Tan, R Guo, J Li - arXiv preprint arXiv:2311.01108, 2023 - arxiv.org
Adopting a two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language
Models (PLMs) have achieved substantial advancements in the field of natural language …

A review on label cleaning techniques for learning with noisy labels

J Shin, J Won, HS Lee, JW Lee - ICT Express, 2024 - Elsevier
Classification models categorize objects into given classes, guided by training samples with
input features and labels. In practice, however, labels can be corrupted by human error or …

Robust Commonsense Reasoning Against Noisy Labels Using Adaptive Correction

X Yang, C Deng, K Wei, D Tao - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Commonsense reasoning based on knowledge graphs (KGs) is a challenging task that
requires predicting complex questions over the described textual contexts and relevant …

Decoding class dynamics in learning with noisy labels

A Tatjer, B Nagarajan, R Marques, P Radeva - Pattern Recognition Letters, 2024 - Elsevier
The creation of large-scale datasets annotated by humans inevitably introduces noisy
labels, leading to reduced generalization in deep-learning models. Sample selection-based …

Object Detectors in the Open Environment: Challenges, Solutions, and Outlook

S Liang, W Wang, R Chen, A Liu, B Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
With the emergence of foundation models, deep learning-based object detectors have
shown practical usability in closed set scenarios. However, for real-world tasks, object …

SplitNet: learnable clean-noisy label splitting for learning with noisy labels

D Kim, K Ryoo, H Cho, S Kim - International Journal of Computer Vision, 2024 - Springer
Annotating the dataset with high-quality labels is crucial for deep networks' performance, but
in real-world scenarios, the labels are often contaminated by noise. To address this, some …

BPT-PLR: A Balanced Partitioning and Training Framework with Pseudo-Label Relaxed Contrastive Loss for Noisy Label Learning

Q Zhang, G Jin, Y Zhu, H Wei, Q Chen - Entropy, 2024 - mdpi.com
While collecting training data, even with the manual verification of experts from
crowdsourcing platforms, eliminating incorrect annotations (noisy labels) completely is …

ChiMera: Learning with noisy labels by contrasting mixed-up augmentations

Z Liu, X Zhang, J He, D Fu, D Samaras, R Tan… - arXiv preprint arXiv …, 2023 - arxiv.org
Learning with noisy labels has been studied to address incorrect label annotations in real-
world applications. In this paper, we present ChiMera, a two-stage learning-from-noisy …

Label-Noise Robust Diffusion Models

B Na, Y Kim, HS Bae, JH Lee, SJ Kwon, W Kang… - arXiv preprint arXiv …, 2024 - arxiv.org
Conditional diffusion models have shown remarkable performance in various generative
tasks, but training them requires large-scale datasets that often contain noise in conditional …