Throughput-optimized OpenCL-based FPGA accelerator for large-scale convolutional neural networks

N Suda, V Chandra, G Dasika, A Mohanty… - Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2016 - dl.acm.org
Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute-/memory-intensive, it is difficult to perform real-time classification with low power consumption on today's computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around time, especially with high-level synthesis methodologies. Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering FPGA resource constraints such as on-chip memory, registers, computational resources, and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, the DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for the convolution operation, and 117.8 GOPS for the entire VGG network performing ImageNet classification on the P395-D8 board.
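The core idea the abstract describes, enumerating hardware design points and keeping the feasible one with the highest estimated throughput under FPGA resource budgets, can be illustrated with a minimal sketch. This is not the paper's actual model: the unroll-factor names (`pe`, `simd`), the resource cost formulas, the budget numbers, and the toy layer list are all illustrative assumptions.

```python
# Hypothetical sketch of throughput-driven design space exploration:
# enumerate unroll factors, discard points that exceed the resource
# budget, keep the fastest survivor. All constants are assumptions,
# not values from the paper.
from itertools import product

# Illustrative FPGA resource budget (roughly Stratix-V class).
BUDGET = {"dsp": 256, "bram_kb": 4096, "bw_gbs": 12.8}

# Toy CNN workload: (output elements, ops per output element) per layer.
LAYERS = [(55 * 55 * 96, 2 * 11 * 11 * 3), (27 * 27 * 256, 2 * 5 * 5 * 48)]

def estimate(pe, simd):
    """Return (throughput proxy, resource use) for one design point."""
    use = {
        "dsp": pe * simd,        # assume one MAC per DSP block
        "bram_kb": 16 * pe,      # assumed per-PE buffering cost
        "bw_gbs": 0.05 * simd,   # assumed streaming bandwidth cost
    }
    cycles = sum(elems * ops / (pe * simd) for elems, ops in LAYERS)
    return 1.0 / cycles, use

def explore():
    """Exhaustively search unroll factors; return the best feasible point."""
    best = None
    for pe, simd in product([1, 2, 4, 8, 16, 32], repeat=2):
        tput, use = estimate(pe, simd)
        feasible = all(use[k] <= BUDGET[k] for k in BUDGET)
        if feasible and (best is None or tput > best[0]):
            best = (tput, pe, simd)
    return best

if __name__ == "__main__":
    tput, pe, simd = explore()
    print(f"best feasible design: PE={pe}, SIMD={simd}")
```

In this toy model the DSP budget binds first, so the search settles on the largest `pe * simd` product that fits; the paper's actual methodology uses analytical models of the OpenCL kernels rather than these made-up cost functions.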