Improving deep regression with ordinal entropy

S Zhang, L Yang, MB Mi, X Zheng, A Yao - arXiv preprint arXiv …, 2023 - arxiv.org
In computer vision, it is often observed that formulating regression problems as a
classification task yields better performance. We investigate this curious phenomenon …
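
The phenomenon the paper studies is easy to reproduce: discretize a continuous target into K bins, train with cross-entropy, and read out a continuous prediction as the expectation over bin centers. A minimal sketch, assuming a scalar target normalized to [0, 1] and an illustrative K = 100 binning (not the paper's setup):

```python
import torch
import torch.nn.functional as F

K = 100  # illustrative bin count
bin_centers = (torch.arange(K, dtype=torch.float32) + 0.5) / K

def regression_as_classification_loss(logits, y):
    # logits: (B, K) scores over bins; y: (B,) continuous targets in [0, 1].
    target_bin = (y * K).long().clamp(max=K - 1)
    return F.cross_entropy(logits, target_bin)

def predict(logits):
    # Continuous prediction = expectation over bin centers.
    return F.softmax(logits, dim=1) @ bin_centers
```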

Deep learning on small datasets without pre-training using cosine loss

B Barz, J Denzler - Proceedings of the IEEE/CVF winter …, 2020 - openaccess.thecvf.com
Two things seem to be indisputable in the contemporary deep learning discourse: 1. The
categorical cross-entropy loss after softmax activation is the method of choice for …
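
For reference, the cosine loss on one-hot targets is nearly a one-liner. A minimal sketch, assuming the network output is compared directly against the one-hot class vector (the paper also considers semantic class embeddings, omitted here):

```python
import torch
import torch.nn.functional as F

def cosine_loss(outputs, labels, num_classes):
    # 1 - cos(outputs, one_hot(labels)); cosine_similarity normalizes
    # both arguments internally, so no explicit L2 normalization is needed.
    targets = F.one_hot(labels, num_classes).float()
    return (1.0 - F.cosine_similarity(outputs, targets, dim=1)).mean()
```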

Orthogonal projection loss

K Ranasinghe, M Naseer, M Hayat… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep neural networks have achieved remarkable performance on a range of classification
tasks, with softmax cross-entropy (CE) loss emerging as the de facto objective function. The …
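
The idea lends itself to a compact batch-level sketch: push same-class feature pairs toward cosine similarity 1 and cross-class pairs toward 0 (orthogonality). The version below assumes each batch contains at least one same-class pair, and leaves the relative weighting of the two terms to the surrounding training loss rather than reproducing the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def orthogonal_projection_loss(features, labels):
    # Assumes the batch contains at least one same-class pair.
    f = F.normalize(features, dim=1)
    sim = f @ f.t()                                   # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    s = sim[same & ~eye].mean()                       # same-class similarity -> 1
    d = sim[~same].abs().mean()                       # cross-class |similarity| -> 0
    return (1.0 - s) + d
```

In practice a term like this is added to the standard CE loss with a weighting coefficient.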

Overfitting mechanism and avoidance in deep neural networks

S Salman, X Liu - arXiv preprint arXiv:1901.06566, 2019 - arxiv.org
Assisted by the availability of data and high-performance computing, deep learning
techniques have achieved breakthroughs and surpassed human performance empirically in …

Taming the cross entropy loss

M Martinez, R Stiefelhagen - … , GCPR 2018, Stuttgart, Germany, October 9 …, 2019 - Springer
We present the Tamed Cross Entropy (TCE) loss function, a robust derivative of the
standard Cross Entropy (CE) loss used in deep learning for classification tasks. However …

OLÉ: Orthogonal low-rank embedding, a plug-and-play geometric loss for deep learning

J Lezama, Q Qiu, P Musé… - Proceedings of the IEEE …, 2018 - openaccess.thecvf.com
Deep neural networks trained using a softmax layer at the top and the cross-entropy loss are
ubiquitous tools for image classification. Yet, this does not naturally enforce intra-class …
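
The geometric objective can be written directly with nuclear norms: shrink each class's feature matrix toward low rank while expanding the batch as a whole, which drives class subspaces apart. A sketch assuming the paper's per-class floor max(Δ, ·); the value delta=1.0 is illustrative:

```python
import torch

def ole_loss(features, labels, delta=1.0):
    # Intra-class term: nuclear norm of each class's features, floored at delta.
    intra = sum(
        torch.linalg.matrix_norm(features[labels == c], ord="nuc").clamp(min=delta)
        for c in labels.unique()
    )
    # Inter-class term: nuclear norm of the whole batch (to be maximized).
    return intra - torch.linalg.matrix_norm(features, ord="nuc")
```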

Rank-N-Contrast: learning continuous representations for regression

K Zha, P Cao, J Son, Y Yang… - Advances in Neural …, 2024 - proceedings.neurips.cc
Deep regression models typically learn in an end-to-end fashion without explicitly
emphasizing a regression-aware representation. Consequently, the learned representations …
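
The core objective ranks the batch around each anchor by label distance: a pair (i, j) is contrasted against every sample at least as far from i in label space as j is. An O(B²) sketch written with explicit loops for clarity, assuming scalar targets and an illustrative temperature:

```python
import torch
import torch.nn.functional as F

def rank_n_contrast_loss(features, targets, tau=2.0):
    f = F.normalize(features, dim=1)
    sim = (f @ f.t()) / tau                         # scaled pairwise similarities
    dist = (targets.unsqueeze(0) - targets.unsqueeze(1)).abs()
    n, loss, count = len(targets), 0.0, 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            mask = dist[i] >= dist[i, j]            # samples at least as far as j
            mask[i] = False                         # exclude the anchor itself
            loss += torch.logsumexp(sim[i][mask], dim=0) - sim[i, j]
            count += 1
    return loss / count
```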

Large-margin softmax loss for convolutional neural networks

W Liu, Y Wen, Z Yu, M Yang - arXiv preprint arXiv:1612.02295, 2016 - arxiv.org
Cross-entropy loss together with softmax is arguably one of the most commonly used
supervision components in convolutional neural networks (CNNs). Despite its simplicity …
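
The margin enters by replacing cos(θ) with ψ(θ) = (−1)^k cos(mθ) − 2k for the target class only. A sketch for m = 2, where cos(2θ) = 2cos²θ − 1 makes ψ cheap to compute; features and weights are normalized here for simplicity, whereas the paper keeps the ‖W‖‖x‖ scale on the logits:

```python
import torch
import torch.nn.functional as F

def large_margin_logits(features, weight, labels):
    # Logits with a multiplicative angular margin (m = 2) on the target class.
    cos = F.normalize(features, dim=1) @ F.normalize(weight, dim=1).t()
    cos2 = 2.0 * cos.pow(2) - 1.0                   # cos(2*theta)
    # psi(theta) = cos(2t) on [0, pi/2], -cos(2t) - 2 on (pi/2, pi]
    psi = torch.where(cos >= 0, cos2, -cos2 - 2.0)
    one_hot = F.one_hot(labels, weight.size(0)).bool()
    return torch.where(one_hot, psi, cos)
```

Training then applies the usual F.cross_entropy to these margin-adjusted logits.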

AMC-loss: Angular margin contrastive loss for improved explainability in image classification

H Choi, A Som, P Turaga - … of the IEEE/CVF conference on …, 2020 - openaccess.thecvf.com
Deep-learning architectures for classification problems involve the cross-entropy loss,
sometimes assisted by auxiliary loss functions such as center loss, contrastive loss and triplet …
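
The auxiliary term here is a contrastive loss measured by geodesic (angular) distance on the unit hypersphere rather than Euclidean distance. A sketch assuming an illustrative margin of 0.5 radians:

```python
import torch
import torch.nn.functional as F

def amc_loss(features, labels, margin=0.5):
    z = F.normalize(features, dim=1)
    cos = (z @ z.t()).clamp(-1 + 1e-7, 1 - 1e-7)    # numerical safety for acos
    geo = torch.acos(cos)                           # pairwise geodesic distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = geo[same & ~eye].pow(2)                   # pull same-class pairs together
    neg = F.relu(margin - geo[~same]).pow(2)        # push others past the margin
    return torch.cat([pos, neg]).mean()
```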

The simpler the better: An entropy-based importance metric to reduce neural networks' depth

V Quétu, Z Liao, E Tartaglione - Joint European Conference on Machine …, 2024 - Springer
While deep neural networks are highly effective at solving complex tasks, large pre-trained
models are commonly employed even to solve considerably simpler downstream tasks …