Authors
Nancy Nayak, Thulasi Tholeti, Muralikrishnan Srinivasan, Sheetal Kalyani
Publication date
2020/3
Journal
arXiv preprint arXiv:2003.09446
Description
This paper introduces an incremental training framework for compressing popular Deep Neural Network (DNN) based unfolded multiple-input-multiple-output (MIMO) detection algorithms like DetNet. The idea of incremental training is explored to select the optimal depth while training. To reduce the computation requirements, i.e., the number of FLoating point OPerations (FLOPs), and enforce sparsity in weights, the concept of structured regularization is explored using group LASSO and sparse group LASSO. Our methods lead to a substantial reduction in both memory requirements and FLOPs compared with DetNet, without compromising on BER performance.
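The group LASSO and sparse group LASSO penalties mentioned in the abstract are standard structured-regularization terms; a minimal NumPy sketch of both (group sizes, weighting by the square root of the group size, and the `lam`/`alpha` hyperparameter names are illustrative assumptions, not taken from the paper) is:

```python
import numpy as np

def group_lasso_penalty(weights, groups, lam=0.01):
    """Group LASSO: lam * sum_g sqrt(|g|) * ||w_g||_2.

    weights: 1-D array of model weights.
    groups:  list of index arrays, one per group (e.g. one group
             per neuron so whole neurons can be driven to zero).
    """
    return lam * sum(np.sqrt(len(g)) * np.linalg.norm(weights[g])
                     for g in groups)

def sparse_group_lasso_penalty(weights, groups, lam=0.01, alpha=0.5):
    """Sparse group LASSO: convex combination of the group-wise
    L2 term and an element-wise L1 term, controlled by alpha."""
    group_term = sum(np.sqrt(len(g)) * np.linalg.norm(weights[g])
                     for g in groups)
    l1_term = np.abs(weights).sum()
    return lam * ((1 - alpha) * group_term + alpha * l1_term)
```

In training, such a penalty would be added to the detection loss so that entire weight groups (and, for the sparse variant, individual weights) are pushed toward zero, which is what enables the FLOP and memory savings described above.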