Neural collapse: A review on modelling principles and generalization

V Kothapalli - arXiv preprint arXiv:2206.04041, 2022 - arxiv.org
Deep classifier neural networks enter the terminal phase of training (TPT) when training
error reaches zero and tend to exhibit intriguing Neural Collapse (NC) properties. Neural …
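
For context, the NC properties surveyed here include the collapse of within-class feature variability, usually labeled NC1 in this literature. Below is a minimal sketch of how one might probe NC1 on penultimate-layer features; it uses a simple trace-ratio proxy rather than the pseudoinverse-based statistic common in the literature, and the function and its name are illustrative, not from the paper:

```python
import numpy as np

def nc1_trace_ratio(features, labels):
    """Proxy for within-class variability collapse (NC1).

    features: (N, d) penultimate-layer activations.
    labels:   (N,) integer class labels.
    Returns tr(Sigma_W) / tr(Sigma_B); values near 0 suggest collapse.
    """
    n, d = features.shape
    global_mean = features.mean(axis=0)
    sigma_w = np.zeros((d, d))
    sigma_b = np.zeros((d, d))
    for c in np.unique(labels):
        fc = features[labels == c]              # samples of class c
        mu_c = fc.mean(axis=0)
        centered = fc - mu_c
        sigma_w += centered.T @ centered / n    # within-class scatter
        dm = (mu_c - global_mean)[:, None]
        sigma_b += (len(fc) / n) * (dm @ dm.T)  # between-class scatter
    return np.trace(sigma_w) / np.trace(sigma_b)
```

Tracking this ratio across epochs after training error hits zero is one way to observe the TPT behavior the review describes.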

On the optimization landscape of neural collapse under MSE loss: Global optimality with unconstrained features

J Zhou, X Li, T Ding, C You, Q Qu… - … on Machine Learning, 2022 - proceedings.mlr.press
When training deep neural networks for classification tasks, an intriguing empirical
phenomenon has been widely observed in the last-layer classifiers and features, where (i) …
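
As a pointer to the setting named in the title: in the unconstrained features model, the last-layer features are treated as free optimization variables. A sketch of the regularized MSE objective typically analyzed in this line of work (notation assumed here, not taken from the snippet):

```latex
\[
\min_{W,\,H,\,b}\ \frac{1}{2N}\,\bigl\| W H + b\,\mathbf{1}_N^{\top} - Y \bigr\|_F^2
+ \frac{\lambda_W}{2}\|W\|_F^2
+ \frac{\lambda_H}{2}\|H\|_F^2
+ \frac{\lambda_b}{2}\|b\|_2^2,
\]
```

where \(H \in \mathbb{R}^{d \times N}\) stacks the freely optimized features, \(W \in \mathbb{R}^{K \times d}\) is the linear classifier, and \(Y\) is the one-hot label matrix.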

Demystifying structural disparity in graph neural networks: Can one size fit all?

H Mao, Z Chen, W Jin, H Han, Y Ma… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recent studies on Graph Neural Networks (GNNs) provide both empirical and
theoretical evidence supporting their effectiveness in capturing structural patterns on both …

Imbalance trouble: Revisiting neural-collapse geometry

C Thrampoulidis, GR Kini… - Advances in Neural …, 2022 - proceedings.neurips.cc
Neural Collapse refers to the remarkable structural properties characterizing the geometry of
class embeddings and classifier weights, found by deep nets when trained beyond zero …
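
In the balanced case, the geometry referred to here is the simplex equiangular tight frame (ETF) formed by the class-mean matrix \(M = [\mu_1, \dots, \mu_K]\); this paper revisits that picture under class imbalance. The standard balanced-case definition, a known result from the neural collapse literature rather than anything specific to this entry:

```latex
\[
M = \sqrt{\tfrac{K}{K-1}}\, P \left( I_K - \tfrac{1}{K}\,\mathbf{1}_K \mathbf{1}_K^{\top} \right),
\qquad
\cos\angle(\mu_i, \mu_j) = -\tfrac{1}{K-1}\ \ (i \neq j),
\]
```

where \(P\) is any partial orthogonal matrix (\(P^{\top} P = I_K\)): equal-norm class means, maximally and equally separated.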

Feature learning in deep classifiers through intermediate neural collapse

A Rangamani, M Lindegaard… - International …, 2023 - proceedings.mlr.press
In this paper, we conduct an empirical study of the feature learning process in deep
classifiers. Recent research has identified a training phenomenon called Neural Collapse …

Improving self-supervised learning by characterizing idealized representations

Y Dubois, S Ermon, TB Hashimoto… - Advances in Neural …, 2022 - proceedings.neurips.cc
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear
what characteristics of their representations lead to high downstream accuracies. In this …

Neural collapse with normalized features: A geometric analysis over the Riemannian manifold

C Yaras, P Wang, Z Zhu… - Advances in Neural …, 2022 - proceedings.neurips.cc
When training overparameterized deep networks for classification tasks, it has been widely
observed that the learned features exhibit a so-called "neural collapse" phenomenon. More …
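
The Riemannian structure in the title arises, in analyses of this kind, from constraining features and classifier weights to unit norm, so that optimization runs over a product of spheres (an oblique manifold). A sketch of such a constrained formulation, with notation assumed rather than quoted from the paper:

```latex
\[
\min_{W,\,H}\ \mathcal{L}(WH,\,Y)
\quad \text{s.t.} \quad
\|w_k\|_2 = 1,\ \ \|h_i\|_2 = 1 \ \ \text{for all } k,\, i.
\]
```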

Perfectly balanced: Improving transfer and robustness of supervised contrastive learning

M Chen, DY Fu, A Narayan, M Zhang… - International …, 2022 - proceedings.mlr.press
An ideal learned representation should display transferability and robustness. Supervised
contrastive learning (SupCon) is a promising method for training accurate models, but …
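
For reference, the SupCon objective mentioned here is, in the original formulation of Khosla et al. (2020), an InfoNCE-style loss whose positives are all same-class samples in the batch:

```latex
\[
\mathcal{L}_{\mathrm{SupCon}}
= \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)}
\log \frac{\exp\!\left(z_i \cdot z_p / \tau\right)}
{\sum_{a \in A(i)} \exp\!\left(z_i \cdot z_a / \tau\right)},
\]
```

where \(A(i)\) is the set of other samples in the batch, \(P(i) \subseteq A(i)\) the same-class positives, \(z\) the normalized embeddings, and \(\tau\) a temperature.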

Image2Point: 3D point-cloud understanding with 2D image pretrained models

C Xu, S Yang, T Galanti, B Wu, X Yue, B Zhai… - … on Computer Vision, 2022 - Springer
3D point-clouds and 2D images are different visual representations of the physical
world. While human vision can understand both representations, computer vision models …

Reverse engineering self-supervised learning

I Ben-Shaul, R Shwartz-Ziv, T Galanti… - Advances in …, 2023 - proceedings.neurips.cc
Understanding the learned representation and underlying mechanisms of Self-Supervised
Learning (SSL) often poses a challenge. In this paper, we 'reverse engineer' SSL, conducting …