Authors
Sung Kim, Patrick Howe, Thierry Moreau, Armin Alaghi, Luis Ceze, Visvesh S Sathe
Publication date
2018/6/7
Journal
IEEE Transactions on Circuits and Systems I: Regular Papers
Volume
65
Issue
12
Pages
4285-4298
Publisher
IEEE
Description
As a result of the increasing demand for deep neural network (DNN)-based services, efforts to develop hardware accelerators for DNNs are growing rapidly. However, while highly efficient accelerators for convolutional DNNs (ConvDNNs) have been developed, less progress has been made with regard to fully-connected DNNs. Based on an analysis of bit-level SRAM errors, we propose memory adaptive training with in-situ canaries (MATIC), a methodology that enables aggressive voltage scaling of accelerator weight memories to improve the energy efficiency of DNN accelerators. To enable accurate operation with voltage overscaling, MATIC combines characteristics of SRAM bit failures with the error resilience of neural networks in a memory-adaptive training (MAT) process. Furthermore, PVT-related voltage margins are eliminated using bit-cells from synaptic weights as in-situ canaries to track runtime …
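The memory-adaptive training idea described above can be illustrated with a small, self-contained sketch. This is not the authors' implementation: it assumes hypothetical 8-bit weight storage, models voltage-overscaled SRAM failures as random stuck-at bit-cells, and injects those faults into the weights during quantization so that a training loop could adapt around them.

```python
import numpy as np

rng = np.random.default_rng(0)
BITS = 8  # assumed signed 8-bit weight storage

def make_fault_mask(n_weights, fail_prob):
    """Per bit-cell stuck-at fault map: -1 = healthy, 0/1 = stuck value.

    Models random SRAM bit failures under voltage overscaling.
    """
    mask = np.full((n_weights, BITS), -1, dtype=np.int8)
    faulty = rng.random((n_weights, BITS)) < fail_prob
    mask[faulty] = rng.integers(0, 2, size=int(faulty.sum()))
    return mask

def apply_faults(w, mask, scale=127.0):
    """Quantize weights to signed 8-bit, force stuck bit-cells, dequantize.

    Using this faulted view of the weights in the forward pass lets
    gradient descent compensate for the broken cells (the MAT idea).
    """
    q = np.clip(np.round(w * scale), -128, 127).astype(np.int16)
    u = (q & 0xFF).astype(np.uint8)          # two's-complement byte
    for b in range(BITS):
        u[mask[:, b] == 0] &= np.uint8(0xFF ^ (1 << b))  # stuck-at-0
        u[mask[:, b] == 1] |= np.uint8(1 << b)           # stuck-at-1
    q = u.astype(np.int16)
    q[q > 127] -= 256                        # back to signed range
    return q.astype(np.float64) / scale
```

A training loop would call `apply_faults` on the weights before every forward pass, so the loss gradient reflects what the faulty weight memory will actually store; the healthy bit-cells are driven to values that compensate for the stuck ones.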
Scholar articles
S Kim, P Howe, T Moreau, A Alaghi, L Ceze, VS Sathe - IEEE Transactions on Circuits and Systems I: Regular …, 2018