Authors
Hsin-Yu Ting, Tootiya Giyahchi, Ardalan Amiri Sani, Eli Bozorgzadeh
Publication date
2020/7/6
Conference
2020 IEEE 31st International Conference on Application-specific Systems, Architectures and Processors (ASAP)
Pages
197-204
Publisher
IEEE
Description
Edge computing can potentially provide abundant processing resources for compute-intensive applications while bringing services close to end devices. With the increasing demand for computing acceleration at the edge, FPGAs have been deployed to provide custom deep neural network (DNN) accelerators. This paper explores a DNN accelerator sharing system on an edge FPGA device that serves various DNN applications from multiple end devices simultaneously. The proposed SharedDNN/PlanAhead policy exploits the regularity among requests for various DNN accelerators and determines which accelerator to allocate to each request, and in what order to respond to the requests, so as to achieve maximum responsiveness for a queue of acceleration requests. Our results show an overall performance gain of up to 2.20x and improved utilization by reducing DNN library usage by up to 27% while staying within the …
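The abstract describes the policy only at a high level: order a queue of requests for responsiveness, and reuse already-loaded accelerators when possible rather than reconfiguring. The following is a minimal, hypothetical sketch of that general idea (earliest-deadline ordering plus reuse of loaded accelerators); it is not the paper's actual SharedDNN/PlanAhead algorithm, and all names, fields, and costs below are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Request:
    # Hypothetical fields: one acceleration request from an end device.
    app: str          # which DNN accelerator (library entry) the request needs
    arrival: float    # arrival time
    deadline: float   # latency target used as the responsiveness criterion
    runtime: float    # estimated execution time on the matching accelerator

@dataclass
class FPGA:
    # Hypothetical model of the edge FPGA: a bounded "library" of loaded accelerators.
    capacity: int                       # how many accelerators fit on the fabric at once
    loaded: List[str] = field(default_factory=list)
    reconfig_cost: float = 5.0          # assumed penalty for loading a new accelerator

def schedule(queue: List[Request], fpga: FPGA, now: float = 0.0):
    """Greedy earliest-deadline-first sketch: serve requests in deadline order,
    reuse an already-loaded accelerator when one matches, and load a new one
    (evicting the oldest if the library is full) otherwise."""
    plan = []
    t = now
    for req in sorted(queue, key=lambda r: r.deadline):
        if req.app in fpga.loaded:
            # Shared accelerator already on the fabric: no reconfiguration needed.
            start = max(t, req.arrival)
        else:
            if len(fpga.loaded) >= fpga.capacity:
                fpga.loaded.pop(0)          # evict the oldest loaded accelerator
            fpga.loaded.append(req.app)     # load the requested accelerator
            start = max(t, req.arrival) + fpga.reconfig_cost
        finish = start + req.runtime
        plan.append((req.app, start, finish, finish <= req.deadline))
        t = finish
    return plan

Under these assumptions, batching requests that share an accelerator avoids repeated reconfiguration, which is the intuition behind exploiting "regularity among requests" mentioned in the abstract.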
Total citations
[Per-year citation chart, 2020–2024; counts not recoverable]
Scholar articles
HY Ting, T Giyahchi, AA Sani, E Bozorgzadeh - 2020 IEEE 31st International Conference on …, 2020