Authors
Netanel Raviv, Pulakesh Upadhyaya, Siddharth Jain, Jehoshua Bruck, Anxiao Andrew Jiang
Publication date
2020
Description
I. INTRODUCTION Deep Neural Networks (DNNs) are the revolutionary force supporting AI today. When DNNs are implemented in hardware, their weights are often stored in nonvolatile memory devices, such as memristors, phase-change memory (PCM), flash memory, CBRAM, and spin devices. Hardware implementation can help make DNNs ubiquitous. However, numerous types of errors appear in such devices, making reliability a critical challenge; the problem is especially significant for long-term use of AI systems due to error accumulation, and it becomes increasingly severe as devices shrink. To meet this challenge, new robust DNN architectures are needed. A number of previous works address robust neural networks. A typical approach is to compute the same result multiple times (sometimes by replicating each layer several times) and aggregate the results to handle errors [4]. Other techniques include retraining and statistical frameworks for testing the fault tolerance of DNNs [1, 2]. In this paper, we present a new scheme for robust DNNs called Coded Deep Neural Network (CodNN). It transforms the internal structure of a DNN by adding redundant neurons and edges to increase its reliability. The added redundancy can be seen as a new type of error-correcting code customized for machine learning. Consider a DNN, which usually has many layers of neurons, and two groups of neurons in two adjacent layers, as shown in Fig. 1a. The outputs of the n neurons in the (l-1)-th layer, v_{l-1,1}, v_{l-1,2}, ..., v_{l-1,n}, are transmitted via the edges to the k neurons in the l-th layer, v_{l,1}, v_{l,2}, ..., v_{l,k} …
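To make the coded-redundancy idea concrete, the following minimal sketch protects the linear map between two adjacent layers with one redundant parity neuron whose weight row is the sum of the original rows, so a single erroneous layer output can be detected. The layer sizes, the single-parity-check code, and the detection-only decoder are illustrative assumptions, not the paper's actual CodNN construction.

```python
import numpy as np

# Minimal sketch, assuming single-parity-check redundancy on a linear layer;
# the paper's CodNN construction may differ.  The product W @ x between two
# adjacent layers is protected by one extra "parity" neuron whose weight row
# is the sum of the original rows.  An error in any one output of layer l is
# detected (locating/correcting it would need additional check rows).

rng = np.random.default_rng(0)
n, k = 8, 4                        # n neurons in layer l-1, k neurons in layer l
W = rng.standard_normal((k, n))    # original weight matrix of layer l
x = rng.standard_normal(n)         # outputs v_{l-1,1}, ..., v_{l-1,n}

W_coded = np.vstack([W, W.sum(axis=0)])   # append redundant parity row

y = W_coded @ x                    # computed on (possibly faulty) hardware
y[2] += 0.5                        # inject an error into one output neuron

syndrome = y[k] - y[:k].sum()      # parity output minus sum of data outputs
print("error detected" if abs(syndrome) > 1e-9 else "outputs consistent")
```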
Total citations
11 (citations-by-year chart: 2020–2023)
Scholar articles
N Raviv, P Upadhyaya, S Jain, J Bruck, A Jiang - Proceedings of the Non Volatile Memories Workshop …, 2020