Self-supervised generative adversarial compression

C Yu, J Pool - Advances in Neural Information Processing …, 2020 - proceedings.neurips.cc
Abstract
Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them under latency, throughput, and storage constraints. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work on compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that standard model compression techniques, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework maintains compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different compression granularities.
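The core idea stated in the abstract, reusing the frozen, already-trained discriminator to supervise a pruned copy of the generator, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy MLP architectures, the 90% unstructured pruning ratio, and the L1 pixel-distillation term toward the dense teacher generator are all assumptions; only the self-supervision signal from the trained discriminator is taken from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# Toy MLP stand-ins for the GAN pair (illustrative, not the paper's networks).
class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

z_dim = 64
teacher_G = Generator(z_dim)   # assumed already trained
D = Discriminator()            # assumed already trained alongside teacher_G
student_G = Generator(z_dim)   # compressed generator to be trained

# Prune 90% of each weight matrix in the student (ratio is an assumption).
for m in student_G.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.9)

# Freeze the supervising networks; only the student is updated.
for p in list(D.parameters()) + list(teacher_G.parameters()):
    p.requires_grad_(False)

opt = torch.optim.Adam(student_G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    z = torch.randn(32, z_dim)
    fake = student_G(z)
    # Self-supervision: the frozen, trained D should judge student samples real.
    logits = D(fake)
    adv_loss = bce(logits, torch.ones_like(logits))
    # Optional pixel-level distillation toward the dense teacher (an assumption).
    distill_loss = F.l1_loss(fake, teacher_G(z))
    loss = adv_loss + distill_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the discriminator and teacher are frozen, gradients flow only into the student generator's masked weights, and the pruning reparameterization keeps the pruned entries at zero throughout training.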