When training deep neural networks for classification tasks, an intriguing empirical phenomenon has been widely observed in the last-layer classifiers and features, where (i) …
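The snippet's enumeration is cut off; for reference, the commonly cited statement of the phenomenon (following Papyan, Han, and Donoho, 2020) is sketched below in standard notation, with h_{c,i} for last-layer features, mu_c and mu_G for class and global means, and w_c for classifier rows. This is the textbook formulation, not necessarily this snippet's exact wording.

```latex
% Standard statement of Neural Collapse (Papyan, Han & Donoho, 2020).
% h_{c,i}: last-layer feature of sample i in class c; \mu_c: class mean;
% \mu_G: global mean; w_c: classifier row for class c; C: number of classes.
\begin{align*}
&\text{(NC1) Variability collapse:} &&
  \Sigma_W := \operatorname{Avg}_{c,i}\,(h_{c,i}-\mu_c)(h_{c,i}-\mu_c)^\top \to 0, \\
&\text{(NC2) Simplex ETF:} &&
  \frac{\langle \mu_c-\mu_G,\ \mu_{c'}-\mu_G\rangle}{\|\mu_c-\mu_G\|\,\|\mu_{c'}-\mu_G\|}
  \to \frac{C\,\delta_{cc'}-1}{C-1}, \\
&\text{(NC3) Self-duality:} &&
  \frac{w_c}{\|w_c\|} \to \frac{\mu_c-\mu_G}{\|\mu_c-\mu_G\|}, \\
&\text{(NC4) Nearest class mean:} &&
  \arg\max_{c}\,\langle w_c, h\rangle \to \arg\min_{c}\,\|h-\mu_c\| \quad \text{(up to bias terms).}
\end{align*}
```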
Recent studies on Graph Neural Networks (GNNs) provide both empirical and theoretical evidence supporting their effectiveness in capturing structural patterns on both …
Neural Collapse refers to the remarkable structural properties characterizing the geometry of class embeddings and classifier weights, found by deep nets when trained beyond zero …
In this paper, we conduct an empirical study of the feature learning process in deep classifiers. Recent research has identified a training phenomenon called Neural Collapse …
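Empirical studies of this kind typically track a within-class variability metric over training. A minimal numpy sketch of one common variant, trace(Sigma_W pinv(Sigma_B)) / C as used in the unconstrained-features literature, is given below; the function name and interface are illustrative, not this paper's code.

```python
import numpy as np

def nc1_metric(features: np.ndarray, labels: np.ndarray) -> float:
    """Within-class variability collapse: trace(Sigma_W @ pinv(Sigma_B)) / C.

    features: (N, d) last-layer activations; labels: (N,) class indices.
    The value tends to 0 as samples collapse onto their class means (NC1).
    """
    classes = np.unique(labels)
    C, (N, d) = len(classes), features.shape
    mu_G = features.mean(axis=0)          # global mean
    Sigma_W = np.zeros((d, d))            # within-class scatter
    Sigma_B = np.zeros((d, d))            # between-class scatter
    for c in classes:
        Xc = features[labels == c]
        mu_c = Xc.mean(axis=0)
        Sigma_W += (Xc - mu_c).T @ (Xc - mu_c) / N
        Sigma_B += np.outer(mu_c - mu_G, mu_c - mu_G) / C
    return float(np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / C)
```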
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear what characteristics of their representations lead to high downstream accuracies. In this …
C Yaras, P Wang, Z Zhu… - Advances in neural …, 2022 - proceedings.neurips.cc
When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon. More …
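The limiting geometry this line of work refers to is the simplex equiangular tight frame (ETF). A small numpy sketch of the standard ETF construction follows, with a check of the equiangularity property; the function name is illustrative.

```python
import numpy as np

def simplex_etf(C: int) -> np.ndarray:
    """Columns are C unit vectors in R^C forming a simplex equiangular tight frame.

    Standard construction: sqrt(C/(C-1)) * (I - (1/C) * ones), whose columns
    have unit norm and pairwise inner product -1/(C-1).
    """
    return np.sqrt(C / (C - 1)) * (np.eye(C) - np.ones((C, C)) / C)

C = 5
M = simplex_etf(C)
G = M.T @ M  # Gram matrix of pairwise cosines
print(np.allclose(np.diag(G), 1.0))                            # unit norms
print(np.allclose(G[~np.eye(C, dtype=bool)], -1.0 / (C - 1)))  # equiangular
```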
An ideal learned representation should display transferability and robustness. Supervised contrastive learning (SupCon) is a promising method for training accurate models, but …
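For context, the SupCon objective (Khosla et al., 2020) pulls each anchor toward all same-class embeddings in the batch and pushes it away from the rest. A minimal numpy sketch of the loss follows; the function name and batch layout are assumptions for illustration, not this paper's implementation.

```python
import numpy as np

def supcon_loss(z: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """Supervised contrastive (SupCon) loss of Khosla et al. (2020).

    z: (N, d) embeddings (L2-normalized inside); labels: (N,) class indices.
    """
    N = len(z)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)              # exclude self-pairs
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(N, dtype=bool)
    # mean log-probability over each anchor's positives, then over anchors
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return float(-per_anchor.mean())

# Tiny usage example with random embeddings (illustrative only).
rng = np.random.default_rng(0)
print(supcon_loss(rng.normal(size=(8, 16)), np.array([0, 0, 1, 1, 2, 2, 3, 3])))
```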
3D point clouds and 2D images are different visual representations of the physical world. While human vision can understand both representations, computer vision models …
Understanding the learned representation and underlying mechanisms of Self-Supervised Learning (SSL) often poses a challenge. In this paper, we 'reverse engineer' SSL, conducting …