Authors
Mario Drumond, Tao Lin, Martin Jaggi, Babak Falsafi
Publication date
2018/4/4
Journal
Advances in Neural Information Processing Systems (NeurIPS 2018)
Description
The wide adoption of DNNs has given birth to unrelenting computing requirements, forcing datacenter operators to adopt domain-specific accelerators to train them. These accelerators typically employ densely packed full-precision floating-point arithmetic to maximize performance per area. Ongoing research efforts seek to further increase that performance density by replacing floating-point with fixed-point arithmetic. However, a significant roadblock for these attempts has been fixed point's narrow dynamic range, which is insufficient for DNN training convergence. We identify block floating point (BFP) as a promising alternative representation since it exhibits wide dynamic range and enables the majority of DNN operations to be performed with fixed-point logic. Unfortunately, BFP alone introduces several limitations that preclude its direct applicability. In this work, we introduce HBFP, a hybrid BFP-FP approach, which performs all dot products in BFP and other operations in floating point. HBFP delivers the best of both worlds: the high accuracy of floating point at the superior hardware density of fixed point. For a wide variety of models, we show that HBFP matches floating point's accuracy while enabling hardware implementations that deliver up to 8.5× higher throughput.
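To illustrate the BFP idea the abstract relies on (a shared exponent per block of values, fixed-point mantissas, so dot products reduce to integer multiply-accumulates with a single floating-point rescale), the following is a minimal NumPy sketch. The block size, mantissa width, and function names (bfp_quantize, bfp_dot) are illustrative assumptions for software emulation, not the paper's hardware implementation or API.

import numpy as np

def bfp_quantize(x, mantissa_bits=8, block_size=64):
    # Illustrative BFP emulation: each block of `block_size` values shares one
    # exponent and stores signed fixed-point mantissas of `mantissa_bits` bits.
    pad = (-len(x)) % block_size
    blocks = np.pad(np.asarray(x, dtype=np.float64), (0, pad)).reshape(-1, block_size)
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    # Shared exponent per block: exponent of the largest-magnitude element.
    exponents = np.floor(np.log2(np.maximum(max_abs, 2.0 ** -126)))
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    lim = 2 ** (mantissa_bits - 1)
    mantissas = np.clip(np.round(blocks / scale), -lim, lim - 1).astype(np.int64)
    return mantissas, exponents

def bfp_dot(a, b, mantissa_bits=8, block_size=64):
    # Dot product in which the per-block work is integer multiply-accumulate;
    # only the final per-block rescaling and sum touch floating point.
    ma, ea = bfp_quantize(a, mantissa_bits, block_size)
    mb, eb = bfp_quantize(b, mantissa_bits, block_size)
    acc = (ma * mb).sum(axis=1)                      # fixed-point MACs per block
    scale = 2.0 ** (ea + eb - 2 * (mantissa_bits - 1))
    return float((acc * scale.squeeze(1)).sum())

Under these assumptions, bfp_dot(u, v) for two equal-length vectors should track np.dot(u, v) up to the quantization error induced by the chosen mantissa width, which is the trade-off the paper's HBFP scheme exploits: dot products in BFP, everything else in floating point.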
Total citations
[Citations-per-year chart, 2018–2024]
Articles in Google Scholar
M Drumond, T Lin, M Jaggi, B Falsafi - Advances in Neural Information Processing Systems, 2018
M Drumond, T Lin, M Jaggi, B Falsafi - arXiv preprint arXiv:1804.01526, 2018
MP Drumond Lages De Oliveira, T Lin, M Jaggi… - 2018