SIAM: Chiplet-based scalable in-memory acceleration with mesh for deep neural networks

G Krishnan, SK Mandal, M Pannala… - ACM Transactions on …, 2021 - dl.acm.org
In-memory computing (IMC) on a monolithic chip for deep learning faces dramatic
challenges in area, yield, and on-chip interconnect cost due to the ever-increasing …

FLASH: Fast Neural Architecture Search with Hardware Optimization

G Li, SK Mandal, UY Ogras, R Marculescu - ACM Transactions on …, 2021 - dl.acm.org
Neural architecture search (NAS) is a promising technique to design efficient and high-
performance deep neural networks (DNNs). As the performance requirements of ML …

COIN: Communication-aware in-memory acceleration for graph convolutional networks

SK Mandal, G Krishnan, AA Goksoy… - IEEE Journal on …, 2022 - ieeexplore.ieee.org
Graph convolutional networks (GCNs) have shown remarkable learning capabilities when
processing graph-structured data found inherently in many application areas. GCNs …

An Energy Consumption Model for SRAM-Based In-Memory-Computing Architectures

B Akgül, TC Karalar - Electronics, 2024 - mdpi.com
In this paper, a mathematical model for obtaining energy consumption of IMC architectures
is constructed. This model provides energy estimation based on the distribution of a specific …

In-Memory Computing for AI Accelerators: Challenges and Solutions

G Krishnan, SK Mandal, C Chakrabarti, J Seo… - … Machine Learning for …, 2023 - Springer
In-memory computing (IMC)-based hardware reduces latency as well as energy
consumption for compute-intensive machine learning (ML) applications. To date, several …

[HTML] End-to-End Benchmarking of Chiplet-Based In-Memory Computing

G Krishnan, SK Mandal, AA Goksoy… - Neuromorphic …, 2023 - intechopen.com
In-memory computing (IMC)-based hardware reduces latency and energy
consumption for compute-intensive machine learning (ML) applications. Several …

Energy-Efficient In-Memory Acceleration of Deep Neural Networks Through a Hardware-Software Co-Design Approach

G Krishnan - 2022 - search.proquest.com
Deep neural networks (DNNs), as a main-stream algorithm for various AI tasks, achieve
higher accuracy at the cost of increased computational complexity and model size, posing …

[BOOK] Energy-Efficient Communication Architectures for Beyond Von-Neumann AI Accelerators: Design and Analysis

SK Mandal - 2022 - search.proquest.com
Hardware accelerators for deep neural networks (DNNs) exhibit a high volume of on-chip
communication due to deep and dense connections. State-of-the-art interconnect …

[PDF] Network-on-Chip (NoC) Performance Analysis and Optimization for Deep Learning Applications

S Mandal - 2021 - minds.wisconsin.edu
Hardware accelerators for deep neural networks (DNNs) exhibit a high volume of on-chip
communication due to deep and dense connections. State-of-the-art interconnect …

Implementation of Data Management Engine-based Network on Chip with Parallel Memory Allocation

K Bukkapatnam, J Singh - NeuroQuantology, 2022 - search.proquest.com
Recently, embedded devices have come to play a prominent role in digital signal processors, multi-
core systems, and hybrid processors. The performance of embedded devices is purely …