Authors
Ioannis Kansizoglou, Loukas Bampis, Antonios Gasteratos
Publication date
2022/3/8
Journal
IEEE Transactions on Neural Networks and Learning Systems
Volume
34
Issue
11
Pages
8815-8824
Publisher
IEEE
Description
The exploitation of deep neural networks (DNNs) as descriptors in feature learning challenges has enjoyed apparent popularity over the past few years. This tendency focuses on the development of effective loss functions that ensure both high feature discrimination among different classes and low geodesic distance between the feature vectors of a given class. The vast majority of contemporary works base their formulation on an empirical assumption about the feature space of a network's last hidden layer, claiming that the weight vector of a class accounts for its geometrical center in the studied space. The article at hand follows a theoretical approach and indicates that the aforementioned hypothesis is not exclusively met. This fact raises stability issues regarding the training procedure of a DNN, as shown in our experimental study. Consequently, a specific symmetry is proposed and studied both …
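The abstract refers to the common assumption that a class's last-layer weight vector acts as the geometrical center of that class's features. Below is a minimal, hypothetical sketch (not from the paper) of how one might probe this assumption empirically by comparing each class's mean feature vector with its classifier weight vector; all names, shapes, and the toy data are illustrative assumptions.

```python
# Hypothetical sketch: check how well each class's last-layer weight vector
# aligns with the geometric center (mean) of that class's feature vectors.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def class_center_alignment(features, labels, weights):
    """
    features: (N, D) feature vectors taken from the last hidden layer
    labels:   (N,)   integer class labels
    weights:  (C, D) classifier weight matrix (one row per class)
    Returns the cosine similarity between each class's mean feature vector
    and its weight vector; values close to 1 would support the assumption.
    """
    sims = {}
    for c in np.unique(labels):
        center = features[labels == c].mean(axis=0)
        sims[int(c)] = cosine(center, weights[c])
    return sims

# Toy usage with random data (purely illustrative).
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
labs = rng.integers(0, 4, size=100)
W = rng.normal(size=(4, 16))
print(class_center_alignment(feats, labs, W))
```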
Total citations
[Citations-per-year chart residue, 2021–2024; exact counts not recoverable]
Scholar articles
I Kansizoglou, L Bampis, A Gasteratos - IEEE Transactions on Neural Networks and Learning …, 2022