In recent years, many accelerators have been proposed to efficiently process sparse tensor algebra applications (e.g., sparse neural networks). However, these proposals are single …
Sparsity is a growing trend in modern DNN models. Existing Sparse-Sparse Matrix Multiplication (SpMSpM) accelerators are tailored to a particular SpMSpM dataflow (i.e., Inner …
Over the past few years, the explosion in sparse tensor algebra workloads has led to a corresponding rise in domain-specific accelerators to service them. Due to the irregularity …
Nowadays, always-on intelligent and self-powered visual perception systems have gained considerable attention and are widely used. However, capturing data and analyzing it via a …
Running multiple deep neural networks (DNNs) in parallel has become an emerging workload in both edge devices, such as mobile phones where multiple tasks serve a single …
This work paves the way to realize a processing-in-pixel (PIP) accelerator based on a multilevel HfOx resistive random access memory (RRAM) as a flexible, energy-efficient, and …
Over the past decade, a massive proliferation of machine learning algorithms has emerged, in applications ranging from surveillance to self-driving cars. The turning point occurred with the …
The combination of pre-trained models and task-specific fine-tuning schemes, such as BERT, has achieved great success in various natural language processing (NLP) tasks …
B Rashidi, C Gao, S Lu, Z Wang, C Zhou… - Proceedings of the 56th …, 2023 - dl.acm.org
Specialized hardware has become an indispensable component to deep neural network (DNN) acceleration. To keep up with the rapid evolution of neural networks, holistic and …