Authors
Suraj Srinivas, Akshayvarun Subramanya, R Venkatesh Babu
Publication date
2017
Workshop paper
CVPR Embedded Vision Workshop
Pages
138-145
Description
The emergence of deep neural networks has seen human-level performance on large-scale computer vision tasks such as image classification. However, these deep networks typically contain a large number of parameters due to dense matrix multiplications and convolutions. As a result, these architectures are highly memory intensive, making them less suitable for embedded vision applications. Sparse computations are known to be much more memory efficient. In this work, we train and build neural networks which implicitly use sparse computations. We introduce additional gate variables to perform parameter selection and show that this is equivalent to using a spike-and-slab prior. We experimentally validate our method on both small and large networks, resulting in highly sparse neural network models.
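To make the gate-variable idea in the abstract concrete, the following minimal PyTorch sketch shows one way a layer can learn per-weight gates that select parameters and induce sparsity. The class name GatedLinear, the initialization, and the simple L1 penalty on the gates are illustrative assumptions of mine; they are not the paper's exact spike-and-slab formulation or training procedure.

```python
import torch
import torch.nn as nn


class GatedLinear(nn.Module):
    """Linear layer whose weights are multiplied element-wise by gate variables.

    Illustrative sketch only: the gate parameterization and regularizer below
    are assumptions, not the exact method described in the paper.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # One gate per weight; gates driven toward zero effectively prune connections.
        self.gate = nn.Parameter(torch.ones(out_features, in_features))

    def forward(self, x):
        # Effective weight = weight * gate, so the gate performs parameter selection.
        return nn.functional.linear(x, self.weight * self.gate, self.bias)

    def sparsity_penalty(self):
        # L1 penalty encourages many gates to shrink toward zero during training
        # (a stand-in here for the selection effect of a spike-and-slab prior).
        return self.gate.abs().sum()


# Usage: add the gate penalty to the task loss; after training, gates near zero
# can be thresholded to obtain a sparse set of weights.
layer = GatedLinear(784, 256)
x = torch.randn(32, 784)
out = layer(x)
loss = out.pow(2).mean() + 1e-4 * layer.sparsity_penalty()
loss.backward()
```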
Total citations
[Citations-per-year chart, 2017–2024]
Scholar articles
S Srinivas, A Subramanya, R Venkatesh Babu - Proceedings of the IEEE conference on computer …, 2017