Neural architecture search (NAS) is a promising technique to design efficient and high-performance deep neural networks (DNNs). As the performance requirements of ML …
Graph convolutional networks (GCNs) have shown remarkable learning capabilities when processing graph-structured data found inherently in many application areas. GCNs …
In this paper, a mathematical model for estimating the energy consumption of IMC architectures is constructed. This model provides energy estimates based on the distribution of a specific …
Abstract In-memory computing (IMC)-based hardware reduces latency as well as energy consumption for compute-intensive machine learning (ML) applications. Till date, several …
Deep neural networks (DNNs), as mainstream algorithms for various AI tasks, achieve higher accuracy at the cost of increased computational complexity and model size, posing …
Hardware accelerators for deep neural networks (DNNs) exhibit high volume of on-chip communication due to deep and dense connections. State-of-the-art interconnect …
K Bukkapatnam, J Singh - NeuroQuantology, 2022 - search.proquest.com
Embedded devices have recently come to play a prominent role in digital signal processors, multi-core systems, and hybrid processors. The performance of embedded devices is purely …