Generalized neural collapse for a large number of classes

J Jiang, J Zhou, P Wang, Q Qu, D Mixon, C You… - arXiv preprint arXiv …, 2023 - arxiv.org
Neural collapse provides an elegant mathematical characterization of learned last layer
representations (aka features) and classifier weights in deep classification models. Such …

Targeted Representation Alignment for Open-World Semi-Supervised Learning

R Xiao, L Feng, K Tang, J Zhao, Y Li… - Proceedings of the …, 2024 - openaccess.thecvf.com
Open-world Semi-Supervised Learning aims to classify unlabeled samples utilizing
information from labeled data while unlabeled samples are not only from the labeled known …

Hierarchical task-incremental learning with feature-space initialization inspired by neural collapse

Q Zhou, X Xiang, J Ma - Neural Processing Letters, 2023 - Springer
Incremental learning models need to update the categories and their conceptual
understanding over time. The current research has placed more emphasis on learning new …

Linguistic Collapse: Neural Collapse in (Large) Language Models

R Wu, V Papyan - arXiv preprint arXiv:2405.17767, 2024 - arxiv.org
Neural collapse ($\mathcal{NC}$) is a phenomenon observed in classification tasks where
top-layer representations collapse into their class means, which become equinorm …

Quantifying the variability collapse of neural networks

J Xu, H Liu - International Conference on Machine Learning, 2023 - proceedings.mlr.press
Recent studies empirically demonstrate the positive relationship between the transferability
of neural networks and the in-class variation of the last layer features. The recently …

Label correction using contrastive prototypical classifier for noisy label learning

C Xu, R Lin, J Cai, S Wang - Information Sciences, 2023 - Elsevier
Deep neural networks typically require a large number of accurately labeled images for
training with cross-entropy loss, and often overfit noisy labels. Contrastive learning has …

Neural Collapse for Cross-entropy Class-Imbalanced Learning with Unconstrained ReLU Feature Model

H Dang, T Tran, T Nguyen, N Ho - arXiv preprint arXiv:2401.02058, 2024 - arxiv.org
The current paradigm of training deep neural networks for classification tasks includes
minimizing the empirical risk that pushes the training loss value towards zero, even after the …

Evaluating the Fairness of Neural Collapse in Medical Image Classification

K Mouheb, M Elbatel, S Klein, EE Bron - arXiv preprint arXiv:2407.05843, 2024 - arxiv.org
Deep learning has achieved impressive performance across various medical imaging tasks.
However, its inherent bias against specific groups hinders its clinical applicability in …

Coordinated Sparse Recovery of Label Noise

Y Yang, N Wang, H Yang, R Li - arXiv preprint arXiv:2404.04800, 2024 - arxiv.org
Label noise is a common issue in real-world datasets that inevitably impacts the
generalization of models. This study focuses on robust classification tasks where the label …

Towards Reliable Link Prediction with Robust Graph Information Bottleneck

Z Zhou, J Yao, J Liu, X Guo, LI He, S Yuan, L Wang… - 2023 - openreview.net
Link prediction on graphs has achieved great success with the rise of deep graph learning.
However, the potential robustness under the edge noise is less investigated. We reveal that …