Multi-view contrastive learning for online knowledge distillation

C Yang, Z An, Y Xu - ICASSP 2021-2021 IEEE International …, 2021 - ieeexplore.ieee.org
Previous Online Knowledge Distillation (OKD) methods often mutually exchange
probability distributions, but neglect the useful representational knowledge. We therefore …
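As a rough illustration of the probability exchange this abstract refers to, here is a minimal NumPy sketch of a symmetric-KL mutual distillation loss between two peer networks. All function names are hypothetical, and the paper's actual contribution (a multi-view contrastive objective over representations) goes beyond this baseline:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), averaged over the batch."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def mutual_distillation_loss(logits_a, logits_b, T=3.0):
    """Symmetric KL between the softened predictions of two peers:
    each network is pulled toward the other's probability distribution."""
    pa, pb = softmax(logits_a, T), softmax(logits_b, T)
    return kl_divergence(pb, pa) + kl_divergence(pa, pb)

# Example: two peers disagree on a batch of 2 samples over 4 classes.
la = np.array([[2.0, 0.5, 0.1, -1.0], [0.0, 1.5, 0.2, 0.3]])
lb = np.array([[1.0, 1.0, 0.0, -0.5], [0.2, 1.0, 0.5, 0.1]])
loss = mutual_distillation_loss(la, lb)
```

The loss is zero only when both peers produce identical distributions, which is exactly the "mutual exchange" behavior the abstract criticizes as ignoring representational knowledge.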

When To Grow? A Fitting Risk-Aware Policy for Layer Growing in Deep Neural Networks

H Wu, W Wang, T Malepathirana… - Proceedings of the …, 2024 - ojs.aaai.org
Neural growth is the process of growing a small neural network into a larger one, and has
been utilized to accelerate the training of deep neural networks. One crucial aspect of neural …
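A common building block of layer growing is a function-preserving insertion: the grown network initially computes exactly the same function as the small one. The sketch below (hypothetical names; a Net2Net-style identity trick, not this paper's risk-aware policy, which instead decides *when* to grow) shows the idea for a plain ReLU MLP:

```python
import numpy as np

def forward(x, layers):
    """Plain MLP forward pass: each layer is a weight matrix, ReLU between layers."""
    h = x
    for i, W in enumerate(layers):
        h = h @ W
        if i < len(layers) - 1:
            h = np.maximum(h, 0.0)  # ReLU on all but the output layer
    return h

def grow_identity_layer(layers, position):
    """Insert an identity-initialized layer after `position` (must not be the
    last layer). Since ReLU is the identity on non-negative inputs, the grown
    network computes the same function as before, so training resumes smoothly."""
    width = layers[position].shape[1]
    new_layers = list(layers)
    new_layers.insert(position + 1, np.eye(width))
    return new_layers

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
layers = [rng.normal(size=(8, 16)), rng.normal(size=(16, 3))]
grown = grow_identity_layer(layers, 0)  # 2-layer net grows into a 3-layer net
```

After growing, `forward(x, grown)` matches `forward(x, layers)` exactly; the new layer only starts to differentiate itself once gradient updates perturb the identity weights.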

Remote sensing image scene classification by multiple granularity semantic learning

W Guo, S Li, J Yang, Z Zhou, Y Liu, J Lu… - IEEE Journal of …, 2022 - ieeexplore.ieee.org
Remote sensing image scene classification faces challenges such as differences in the
semantic granularity of different scene categories and the imbalance of the number of …

Towards efficient convolutional network models with filter distribution templates

R Izquierdo-Cordova, W Mayol-Cuevas - arXiv preprint arXiv:2104.08446, 2021 - arxiv.org
Increasing the number of filters in deeper layers as feature maps shrink is a widely
adopted pattern in convolutional network design. It can be found in classical CNN …
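The classical pattern the abstract describes can be written as a one-line template: filter counts double each time the spatial resolution of the feature maps is halved. A minimal sketch (function name hypothetical; the paper studies alternative distribution templates, not this default):

```python
def pyramid_filter_template(base_filters, num_stages):
    """Classical 'doubling' template: the filter count doubles at every stage
    where feature maps are downsampled (as in VGG- or ResNet-style backbones)."""
    return [base_filters * (2 ** s) for s in range(num_stages)]

# A 4-stage backbone starting at 64 filters yields the familiar progression.
stages = pyramid_filter_template(64, 4)
```

Here `stages` is `[64, 128, 256, 512]`, the progression seen in many classical CNNs; the paper's templates redistribute these counts under a fixed parameter budget.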