Sparse superposition codes for Gaussian vector quantization

I Kontoyiannis, KR Rad… - 2010 IEEE Information Theory Workshop on Information Theory (ITW …), 2010 - ieeexplore.ieee.org
A new method is presented for the optimal or near-optimal quantization of memoryless Gaussian data. The basic construction of the codebook is motivated by related ideas in the statistical framework of sparse recovery in linear regression. Similarly, the encoding is performed by a convex-hull iterative algorithm. Preliminary theoretical results establish the optimality of the proposed algorithm for a certain range of the parameter values. Experimental results demonstrate these theoretical findings on simulated data. The performance of the proposed algorithm is compared with that of trellis-coded quantization and of the recently proposed algorithms in and. Depending on the choice of the relevant design parameters, the complexity of the encoding algorithm varies, and is generally polynomial in the data block-length. The present results are, in part, motivated by the analogous channel coding results of.
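The abstract describes the codebook as a sparse-superposition construction drawn from sparse linear regression, with encoding done by an iterative algorithm. The Python sketch below illustrates only the generic sparse-superposition structure: a random dictionary split into L sections of M columns, codewords formed by selecting one column per section, and a greedy successive-selection encoder used as a simple stand-in for the paper's convex-hull iterative algorithm, which the abstract does not specify in detail. All function names, the fixed coefficient scaling, and the parameter values are illustrative assumptions, not the authors' design.

```python
import numpy as np

def make_sparse_superposition_dictionary(n, L, M, rng):
    """Random dictionary A of shape (n, L*M); codewords are A @ beta,
    where beta has exactly one nonzero entry in each of the L sections.
    Entries scaled so each column has roughly unit norm (assumption)."""
    return rng.standard_normal((n, L * M)) / np.sqrt(n)

def greedy_encode(x, A, L, M, coef):
    """Greedy successive-selection encoder (stand-in for the paper's
    convex-hull iterative algorithm): in each section, pick the column
    most correlated with the current residual and subtract coef times it."""
    residual = x.copy()
    beta = np.zeros(A.shape[1])
    for sec in range(L):
        cols = slice(sec * M, (sec + 1) * M)
        scores = A[:, cols].T @ residual
        j = sec * M + int(np.argmax(scores))
        beta[j] = coef
        residual = residual - coef * A[:, j]
    return beta, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, L, M = 64, 16, 32            # block length, sections, columns per section (illustrative)
    x = rng.standard_normal(n)      # one block of a memoryless Gaussian source
    A = make_sparse_superposition_dictionary(n, L, M, rng)
    coef = np.sqrt(n * np.var(x) / L)   # crude power allocation per section (assumption)
    beta, residual = greedy_encode(x, A, L, M, coef)
    rate = L * np.log2(M) / n           # bits per source sample
    mse = np.mean(residual ** 2)        # empirical distortion
    print(f"rate ~ {rate:.2f} bits/sample, distortion (MSE) ~ {mse:.3f}")
```

Because the encoder only searches M columns in each of L sections, its cost is O(n L M) per block, which is consistent with the abstract's remark that the encoding complexity is polynomial in the block length for suitable choices of the design parameters.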