A comprehensive survey on test-time adaptation under distribution shifts

J Liang, R He, T Tan - International Journal of Computer Vision, 2024 - Springer
Machine learning methods strive to acquire a robust model during the training
process that can effectively generalize to test samples, even in the presence of distribution …

Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks

L Wang, KJ Yoon - IEEE Transactions on Pattern Analysis and …, 2021 - ieeexplore.ieee.org
In recent years, deep neural models have been successful in almost every field, even
solving the most complex problem statements. However, these models are huge in size, with …

Fine-tuning global model via data-free knowledge distillation for non-iid federated learning

L Zhang, L Shen, L Ding, D Tao… - Proceedings of the …, 2022 - openaccess.thecvf.com
Federated Learning (FL) is an emerging distributed learning paradigm under privacy
constraint. Data heterogeneity is one of the main challenges in FL, which results in slow …

Source-free domain adaptation for semantic segmentation

Y Liu, W Zhang, J Wang - … of the IEEE/CVF Conference on …, 2021 - openaccess.thecvf.com
Unsupervised Domain Adaptation (UDA) can tackle the challenge that
convolutional neural network (CNN)-based approaches for semantic segmentation heavily …

Distilling object detectors via decoupled features

J Guo, K Han, Y Wang, H Wu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Knowledge distillation is a widely used paradigm for inheriting information from a
complicated teacher network to a compact student network and maintaining the strong …

Data-free model extraction

JB Truong, P Maini, RJ Walls… - Proceedings of the …, 2021 - openaccess.thecvf.com
Current model extraction attacks assume that the adversary has access to a surrogate
dataset with characteristics similar to the proprietary data used to train the victim model. This …

Towards data-free model stealing in a hard label setting

S Sanyal, S Addepalli, RV Babu - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …

Towards efficient data free black-box adversarial attack

J Zhang, B Li, J Xu, S Wu, S Ding… - Proceedings of the …, 2022 - openaccess.thecvf.com
Classic black-box adversarial attacks can take advantage of transferable adversarial
examples generated by a similar substitute model to successfully fool the target model …

Spot-adaptive knowledge distillation

J Song, Y Chen, J Ye, M Song - IEEE Transactions on Image …, 2022 - ieeexplore.ieee.org
Knowledge distillation (KD) has become a well-established paradigm for compressing deep
neural networks. The typical way of conducting knowledge distillation is to train the student …
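The snippet above refers to the standard way of conducting knowledge distillation: training the student to mimic the teacher's softened output distribution. A minimal sketch of the classic temperature-scaled soft-label loss (the generic Hinton-style formulation, written here in NumPy for illustration, not code from the cited paper) is:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Numerically stable temperature-scaled softmax over the last axis.
    z = (logits - logits.max(axis=-1, keepdims=True)) / T
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between the teacher's and student's softened
    # distributions, scaled by T^2 so gradients keep a comparable
    # magnitude across temperatures (the usual KD convention).
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()
```

In practice this term is combined with the ordinary cross-entropy on the ground-truth labels via a weighting coefficient; the loss is zero when the student's logits match the teacher's.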

Synthesizing informative training samples with GAN

B Zhao, H Bilen - arXiv preprint arXiv:2204.07513, 2022 - arxiv.org
Remarkable progress has been achieved in synthesizing photo-realistic images with
generative adversarial networks (GANs). Recently, GANs have been utilized as the training sample …