Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks

L Wang, KJ Yoon - IEEE transactions on pattern analysis and …, 2021 - ieeexplore.ieee.org
Deep neural models have, in recent years, been successful in almost every field, solving even
the most complex problems. However, these models are huge in size, with …
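
As a minimal sketch of the student-teacher objective such surveys cover: soften both the teacher's and the student's logits with a temperature and mix the resulting KL term with the ordinary cross-entropy on hard labels. The PyTorch interface below and the values of T and alpha are illustrative assumptions, not the survey's exact formulation.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Classic knowledge-distillation loss (sketch): KL between
        temperature-softened teacher and student distributions, blended
        with the usual cross-entropy on the ground-truth labels."""
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        log_student = F.log_softmax(student_logits / T, dim=1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce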

Applications and techniques for fast machine learning in science

AMC Deiana, N Tran, J Agar, M Blott… - Frontiers in big …, 2022 - frontiersin.org
In this community review report, we discuss applications and techniques for fast machine
learning (ML) in science—the concept of integrating powerful ML methods into the real-time …

DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation

N Ruiz, Y Li, V Jampani, Y Pritch… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large text-to-image models achieved a remarkable leap in the evolution of AI, enabling high-
quality and diverse synthesis of images from a given text prompt. However, these models …
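
A rough sketch of the subject-driven fine-tuning step this paper proposes: the standard diffusion denoising loss on a handful of subject images (captioned with a rare identifier token) plus a prior-preservation term on generated images of the generic class. The unet, noise_scheduler, and batch fields below are assumed interfaces, and the equal weighting of the two terms is an illustrative choice rather than the paper's exact recipe.

    import torch
    import torch.nn.functional as F

    def dreambooth_step(unet, noise_scheduler, subject_batch, prior_batch):
        """One hypothetical training step: denoising loss on subject images
        plus a prior-preservation loss on generated class images."""
        loss = 0.0
        for batch, weight in [(subject_batch, 1.0), (prior_batch, 1.0)]:
            latents, text_emb = batch["latents"], batch["text_emb"]
            noise = torch.randn_like(latents)
            t = torch.randint(0, noise_scheduler.num_timesteps,
                              (latents.shape[0],), device=latents.device)
            noisy = noise_scheduler.add_noise(latents, noise, t)
            pred = unet(noisy, t, text_emb)  # predict the added noise
            loss = loss + weight * F.mse_loss(pred, noise)
        return loss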

Multi-concept customization of text-to-image diffusion

N Kumari, B Zhang, R Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …

Ablating concepts in text-to-image diffusion models

N Kumari, B Zhang, SY Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful
compositional ability. However, these models are typically trained on an enormous amount …

StyleGAN-NADA: CLIP-guided domain adaptation of image generators

R Gal, O Patashnik, H Maron, AH Bermano… - ACM Transactions on …, 2022 - dl.acm.org
Can a generative model be trained to produce images from a specific domain, guided only
by a text prompt, without seeing any image? In other words: can an image generator be …
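
The zero-shot adaptation asked about here is commonly driven by a directional CLIP loss: the shift of the adapted generator's image embedding relative to the frozen generator's should align with the shift from the source text prompt to the target one. The sketch below assumes generic clip_image_enc / clip_text_enc callables; it is not the paper's reference implementation.

    import torch
    import torch.nn.functional as F

    def clip_directional_loss(clip_image_enc, clip_text_enc,
                              img_frozen, img_adapted, src_tokens, tgt_tokens):
        """1 - cosine similarity between the image-space and text-space
        CLIP directions (sketch of a StyleGAN-NADA-style objective)."""
        e_img_src = F.normalize(clip_image_enc(img_frozen), dim=-1)
        e_img_tgt = F.normalize(clip_image_enc(img_adapted), dim=-1)
        e_txt_src = F.normalize(clip_text_enc(src_tokens), dim=-1)
        e_txt_tgt = F.normalize(clip_text_enc(tgt_tokens), dim=-1)
        d_img = F.normalize(e_img_tgt - e_img_src, dim=-1)
        d_txt = F.normalize(e_txt_tgt - e_txt_src, dim=-1)
        return (1.0 - (d_img * d_txt).sum(dim=-1)).mean()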

Training generative adversarial networks with limited data

T Karras, M Aittala, J Hellsten, S Laine… - Advances in neural …, 2020 - proceedings.neurips.cc
Training generative adversarial networks (GAN) using too little data typically leads to
discriminator overfitting, causing training to diverge. We propose an adaptive discriminator …
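
The adaptive scheme can be sketched as a feedback controller: estimate how strongly the discriminator overfits (e.g., how often its logits on real images are positive) and nudge the augmentation probability p toward a fixed target. The target value, step size, and EMA decay below are illustrative, not the paper's tuned hyperparameters.

    import torch

    class AdaptiveAugmentProb:
        """Sketch of adaptive discriminator augmentation: raise p when the
        overfitting statistic r_t exceeds the target, lower it otherwise."""
        def __init__(self, target=0.6, step=0.01, decay=0.99):
            self.p, self.target, self.step, self.decay = 0.0, target, step, decay
            self.rt = 0.0

        def update(self, real_logits):
            # r_t: EMA of the fraction of real logits the discriminator classifies as real
            batch_rt = torch.sign(real_logits).mean().item()
            self.rt = self.decay * self.rt + (1.0 - self.decay) * batch_rt
            self.p += self.step if self.rt > self.target else -self.step
            self.p = min(max(self.p, 0.0), 1.0)
            return self.p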

GAN prior embedded network for blind face restoration in the wild

T Yang, P Ren, X Xie, L Zhang - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Blind face restoration (BFR) from severely degraded face images in the wild is a very
challenging problem. Due to the high ill-posedness of the problem and the complex unknown …

Differentiable augmentation for data-efficient GAN training

S Zhao, Z Liu, J Lin, JY Zhu… - Advances in neural …, 2020 - proceedings.neurips.cc
The performance of generative adversarial networks (GANs) heavily deteriorates given a
limited amount of training data. This is mainly because the discriminator is memorizing the …
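
The remedy proposed here is to pass both real and generated images through the same differentiable augmentations before the discriminator, so gradients still reach the generator and the discriminator cannot simply memorize the real set. The two transforms below (random brightness, roll-based translation) are simplified stand-ins for the paper's augmentation set.

    import torch

    def diff_augment(x):
        """Differentiable augmentations (sketch): every op is a plain torch
        tensor operation, so gradients flow back to the generator."""
        # random per-sample brightness shift in [-0.5, 0.5)
        x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
        # random translation of up to 1/8 of the image size (same shift for the batch)
        _, _, h, w = x.shape
        dh = torch.randint(-h // 8, h // 8 + 1, (1,)).item()
        dw = torch.randint(-w // 8, w // 8 + 1, (1,)).item()
        return torch.roll(x, shifts=(dh, dw), dims=(2, 3))

    # Usage sketch: apply the same augmentation to both branches.
    # d_real = D(diff_augment(real_images))
    # d_fake = D(diff_augment(G(z)))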

Few-shot image generation via cross-domain correspondence

U Ojha, Y Li, J Lu, AA Efros, YJ Lee… - Proceedings of the …, 2021 - openaccess.thecvf.com
Training generative models, such as GANs, on a target domain containing limited examples
(e.g., 10) can easily result in overfitting. In this work, we seek to utilize a large source domain …