Rank-n-contrast: learning continuous representations for regression

K Zha, P Cao, J Son, Y Yang… - Advances in Neural …, 2024 - proceedings.neurips.cc
Deep regression models typically learn in an end-to-end fashion without explicitly
emphasizing a regression-aware representation. Consequently, the learned representations …
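The snippet cuts off before describing the method, but the rank-based contrastive idea named in the title can be sketched as a loss where each anchor-positive pair is contrasted against all samples at least as far from the anchor in label space, so embedding similarity is pushed to rank by label distance. This is an illustrative reconstruction, not the paper's exact objective; the function name, temperature value, and choice of cosine similarity are assumptions:

```python
import numpy as np

def rank_n_contrast_loss(features, labels, temperature=2.0):
    """Sketch of a rank-based contrastive loss for regression: for anchor i
    and positive j, the normalization set contains every sample k at least
    as far from i in label space as j, so the softmax rewards embeddings
    whose similarities decrease with label distance."""
    n = features.shape[0]
    # cosine similarity, scaled by temperature
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = (f @ f.T) / temperature
    label_dist = np.abs(labels[:, None] - labels[None, :])
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # S_{i,j} = {k != i : d(y_i, y_k) >= d(y_i, y_j)}
            mask = label_dist[i] >= label_dist[i, j]
            mask[i] = False
            denom = np.sum(np.exp(sim[i][mask]))
            loss += -np.log(np.exp(sim[i, j]) / denom)
    return loss / (n * (n - 1))
```

Under this objective, a batch whose embedding order agrees with its label order scores lower than the same embeddings assigned to shuffled labels, which is the ranking behavior the title suggests.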

Supervised contrastive regression

K Zha, P Cao, Y Yang, D Katabi - arXiv preprint arXiv:2210.01189, 2022 - arxiv.org
Deep regression models typically learn in an end-to-end fashion and do not explicitly try to
learn a regression-aware representation. Their representations tend to be fragmented and …

A comprehensive analysis of deep regression

S Lathuilière, P Mesejo… - IEEE transactions on …, 2019 - ieeexplore.ieee.org
Deep learning revolutionized data science, and recently its popularity has grown
exponentially, as has the number of papers employing deep networks. Vision tasks, such as …

Semi-supervised contrastive learning for deep regression with ordinal rankings from spectral seriation

W Dai, Y Du, H Bai, KT Cheng… - Advances in Neural …, 2023 - proceedings.neurips.cc
Contrastive learning methods can be applied to deep regression by enforcing label distance
relationships in feature space. However, these methods are limited to labeled data only …
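The snippet is truncated, but spectral seriation itself, recovering a one-dimensional ordering from pairwise similarities by sorting items along the Fiedler vector of a graph Laplacian, is a classical procedure and can be sketched independently of this paper's pipeline; the exponential similarity kernel in the test is an assumption for illustration:

```python
import numpy as np

def spectral_seriation(similarity):
    """Recover an ordinal ranking from a pairwise similarity matrix by
    sorting items along the Fiedler vector: the eigenvector of the graph
    Laplacian associated with the second-smallest eigenvalue."""
    degree = similarity.sum(axis=1)
    laplacian = np.diag(degree) - similarity
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues ascending
    fiedler = eigvecs[:, 1]  # second-smallest eigenvalue's eigenvector
    return np.argsort(fiedler)
```

The recovered ordering is defined only up to reversal (the Fiedler vector's sign is arbitrary), which suffices for deriving ordinal rankings from unlabeled samples.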

Improving deep regression with ordinal entropy

S Zhang, L Yang, MB Mi, X Zheng, A Yao - arXiv preprint arXiv …, 2023 - arxiv.org
In computer vision, it is often observed that formulating regression problems as
classification tasks yields better performance. We investigate this curious phenomenon …
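The phenomenon the snippet names, treating regression as classification, typically involves discretizing the continuous target into bins, training a classifier over the bins, and decoding a continuous prediction from the class probabilities. A minimal sketch of that reduction follows; the bin layout and expectation-based decoding are illustrative assumptions, not this paper's specific method:

```python
import numpy as np

def to_bins(y, edges):
    """Discretize continuous targets into class indices given bin edges."""
    # np.digitize returns 1-based bin positions; shift and clip to valid classes
    return np.clip(np.digitize(y, edges) - 1, 0, len(edges) - 2)

def decode(probs, edges):
    """Map class probabilities back to a continuous prediction via the
    expected bin center (expectation decoding)."""
    centers = (edges[:-1] + edges[1:]) / 2
    return probs @ centers
```

Any classifier can sit between these two steps; expectation decoding (rather than taking the argmax bin center) keeps the final prediction continuous.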

Bi-tuning of pre-trained representations

J Zhong, X Wang, Z Kou, J Wang, M Long - arXiv preprint arXiv …, 2020 - arxiv.org
It is common within the deep learning community to first pre-train a deep neural network on
a large-scale dataset and then fine-tune the pre-trained model to a specific downstream …

Blessing of depth in linear regression: Deeper models have flatter landscape around the true solution

J Ma, S Fattahi - Advances in Neural Information Processing …, 2022 - proceedings.neurips.cc
This work characterizes the effect of depth on the optimization landscape of linear
regression, showing that, despite their nonconvexity, deeper models have more desirable …

Investigating power laws in deep representation learning

A Ghosh, AK Mondal, KK Agrawal… - arXiv preprint arXiv …, 2022 - arxiv.org
Representation learning that leverages large-scale labelled datasets is central to recent
progress in machine learning. Access to task-relevant labels at scale is often scarce or …

No one representation to rule them all: Overlapping features of training methods

R Gontijo-Lopes, Y Dauphin, ED Cubuk - arXiv preprint arXiv:2110.12899, 2021 - arxiv.org
Despite being able to capture a range of features of the data, high-accuracy models trained
with supervision tend to make similar predictions. This seemingly implies that high …

Learning de-biased representations with biased representations

H Bahng, S Chun, S Yun, J Choo… - … on Machine Learning, 2020 - proceedings.mlr.press
Many machine learning algorithms are trained and evaluated by splitting data from a single
source into training and test sets. While such focus on in-distribution learning scenarios has …