MobileVOS: Real-time video object segmentation contrastive learning meets knowledge distillation

R Miles, MK Yucel, B Manganelli… - Proceedings of the …, 2023 - openaccess.thecvf.com
This paper tackles the problem of semi-supervised video object segmentation on resource-
constrained devices, such as mobile phones. We formulate this problem as a distillation …

Compressing visual-linguistic model via knowledge distillation

Z Fang, J Wang, X Hu, L Wang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few
aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively …

Contrastive deep supervision

L Zhang, X Chen, J Zhang, R Dong, K Ma - European Conference on …, 2022 - Springer
The success of deep learning is usually accompanied by the growth in neural network
depth. However, the traditional training method only supervises the neural network at its last …

On representation knowledge distillation for graph neural networks

CK Joshi, F Liu, X Xun, J Lin… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
Knowledge distillation (KD) is a learning paradigm for boosting resource-efficient graph
neural networks (GNNs) using more expressive yet cumbersome teacher models. Past work …

F3-Net: Multiview scene matching for drone-based geo-localization

B Sun, G Liu, Y Yuan - IEEE Transactions on Geoscience and …, 2023 - ieeexplore.ieee.org
Scene matching involves establishing a mapping relationship between heterogeneous
images, which is crucial for drone visual geo-localization. However, it poses a significant …

Network binarization via contrastive learning

Y Shang, D Xu, Z Zong, L Nie, Y Yan - European Conference on Computer …, 2022 - Springer
Neural network binarization accelerates deep models by quantizing their weights and
activations into 1-bit. However, there is still a huge performance gap between Binary Neural …

Modality-aware contrastive instance learning with self-distillation for weakly-supervised audio-visual violence detection

J Yu, J Liu, Y Cheng, R Feng, Y Zhang - Proceedings of the 30th ACM …, 2022 - dl.acm.org
Weakly-supervised audio-visual violence detection aims to distinguish snippets containing
multimodal violence events with video-level labels. Many prior works perform audio-visual …

Knowledge condensation distillation

C Li, M Lin, Z Ding, N Lin, Y Zhuang, Y Huang… - … on Computer Vision, 2022 - Springer
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher
network to strengthen a smaller student. Existing methods focus on excavating the …

Pixel distillation: Cost-flexible distillation across image sizes and heterogeneous networks

G Guo, D Zhang, L Han, N Liu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Previous knowledge distillation (KD) methods mostly focus on compressing network
architectures, which is not thorough enough in deployment as some costs like transmission …

Multimodal mutual information maximization: A novel approach for unsupervised deep cross-modal hashing

T Hoang, TT Do, TV Nguyen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In this article, we adopt the maximizing mutual information (MI) approach to tackle the
problem of unsupervised learning of binary hash codes for efficient cross-modal retrieval …