Authors
Babak Ehteshami Bejnordi, Tijmen Blankevoort, Max Welling
Publication date
2020
Conference
Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020).
Description
We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost. We achieve this by gating the deep-learning architecture at a fine-grained level. Individual convolutional maps are turned on/off conditionally on features in the network. To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner. We also introduce a generally applicable tool, batch-shaping, that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution. We use this novel technique to force gates to be more conditional on the data. We present results on the CIFAR-10 and ImageNet datasets for image classification, and on Cityscapes for semantic segmentation. Our results show that our method can slim down large architectures conditionally, such that the average …
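To make the gating idea concrete, below is a minimal PyTorch sketch of per-channel conditional gating inside a residual block. It is not the paper's implementation: the straight-through sigmoid estimator, the hidden layer size, and the names ChannelGate and GatedResidualBlock are assumptions standing in for the paper's gating unit; the batch-shaping loss on the gate activations is not included.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelGate(nn.Module):
    """Predict one on/off gate per convolutional channel from the block input.

    Assumed form: global average pool followed by a small two-layer MLP;
    the actual gating unit in the paper may differ.
    """

    def __init__(self, in_channels: int, gated_channels: int, hidden: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(in_channels, hidden)
        self.fc2 = nn.Linear(hidden, gated_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze spatial dimensions: (N, C, H, W) -> (N, C)
        z = x.mean(dim=(2, 3))
        logits = self.fc2(F.relu(self.fc1(z)))
        if self.training:
            # Straight-through estimator (an assumption, standing in for the
            # paper's gradient estimator): hard 0/1 gates in the forward pass,
            # sigmoid gradients in the backward pass.
            soft = torch.sigmoid(logits)
            hard = (soft > 0.5).float()
            return hard + soft - soft.detach()
        # Deterministic hard gates at inference time.
        return (logits > 0).float()


class GatedResidualBlock(nn.Module):
    """Residual block whose intermediate channels are gated conditionally
    on the input features, so unused maps are switched off per example."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.gate = ChannelGate(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)                       # (N, C) gates, conditioned on x
        h = F.relu(self.bn1(self.conv1(x)))
        h = h * g[:, :, None, None]            # turn individual maps on/off
        h = self.bn2(self.conv2(h))
        return F.relu(x + h)


if __name__ == "__main__":
    block = GatedResidualBlock(channels=32)
    out = block(torch.randn(4, 32, 16, 16))
    print(out.shape)  # torch.Size([4, 32, 16, 16])
```

At inference, channels whose gates are zero need not be computed at all, which is where the reduced dynamic computational cost comes from; the batch-shaping loss described in the abstract would additionally push the aggregate gate activations toward a chosen prior so the gates stay data-conditional rather than collapsing to always-on or always-off.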
Total citations
[Citations-per-year chart, 2019–2024; per-year counts not recoverable]
Scholar articles
BE Bejnordi, T Blankevoort, M Welling - arXiv preprint arXiv:1907.06627, 2019
B Ehteshami Bejnordi, T Blankevoort, M Welling - arXiv e-prints, 2019