Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks

L Wang, KJ Yoon - IEEE transactions on pattern analysis and …, 2021 - ieeexplore.ieee.org
Deep neural models, in recent years, have been successful in almost every field, even
solving the most complex problem statements. However, these models are huge in size with …

An overview of neural network compression

JO Neill - arXiv preprint arXiv:2006.03669, 2020 - arxiv.org
Overparameterized networks trained to convergence have shown impressive performance
in domains such as computer vision and natural language processing. Pushing state of the …

Knowledge distillation: A survey

J Gou, B Yu, SJ Maybank, D Tao - International Journal of Computer Vision, 2021 - Springer
In recent years, deep neural networks have been successful in both industry and academia,
especially for computer vision tasks. The great success of deep learning is mainly due to its …

Towards data-free model stealing in a hard label setting

S Sanyal, S Addepalli, RV Babu - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …

Comparing Kullback-Leibler divergence and mean squared error loss in knowledge distillation

T Kim, J Oh, NY Kim, S Cho, SY Yun - arXiv preprint arXiv:2105.08919, 2021 - arxiv.org
Knowledge distillation (KD), transferring knowledge from a cumbersome teacher model to a
lightweight student model, has been investigated to design efficient neural architectures …
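The loss comparison studied in the entry above can be illustrated with a minimal sketch in plain Python (the logit values are hypothetical and the code is not from the paper): the KL objective matches temperature-softened probability distributions, while the MSE objective matches raw logits directly.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax, used to soften distributions in KD.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_distillation_loss(teacher_logits, student_logits, T=4.0):
    # KL(p_teacher || p_student) between temperature-softened
    # probabilities, scaled by T^2 as in the standard KD objective.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def mse_logit_loss(teacher_logits, student_logits):
    # Mean squared error between raw logits, the alternative
    # objective compared against KL in this line of work.
    diffs = [(t - s) ** 2 for t, s in zip(teacher_logits, student_logits)]
    return sum(diffs) / len(diffs)

# Example with made-up teacher/student logits for one sample.
teacher = [2.0, 1.0, 0.1]
student = [1.5, 1.2, 0.3]
kl = kl_distillation_loss(teacher, student)
mse = mse_logit_loss(teacher, student)
```

Both losses vanish when student and teacher agree; they differ in how they weight errors, which is the distinction the paper analyzes.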

GAN compression: Efficient architectures for interactive conditional GANs

M Li, J Lin, Y Ding, Z Liu, JY Zhu… - Proceedings of the …, 2020 - openaccess.thecvf.com
Conditional Generative Adversarial Networks (cGANs) have enabled controllable
image synthesis for many computer vision and graphics applications. However, recent …

StyleGAN2 distillation for feed-forward image manipulation

Y Viazovetskyi, V Ivashkin, E Kashin - … , Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
StyleGAN2 is a state-of-the-art network in generating realistic images. Besides, it was
explicitly trained to have disentangled directions in latent space, which allows efficient …

MI-GAN: A simple baseline for image inpainting on mobile devices

A Sargsyan, S Navasardyan, X Xu… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent years, many deep learning based image inpainting methods have been developed
by the research community. Some of those methods have shown impressive image …

Efficient spatially sparse inference for conditional GANs and diffusion models

M Li, J Lin, C Meng, S Ermon… - Advances in neural …, 2022 - proceedings.neurips.cc
During image editing, existing deep generative models tend to re-synthesize the entire
output from scratch, including the unedited regions. This leads to a significant waste of …

Anycost GANs for interactive image synthesis and editing

J Lin, R Zhang, F Ganz, S Han… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Generative adversarial networks (GANs) have enabled photorealistic image synthesis and
editing. However, due to the high computational cost of large-scale generators (e.g., …