FORMS: Fine-grained polarized ReRAM-based in-situ computation for mixed-signal DNN accelerator

G Yuan, P Behnam, Z Li, A Shafiee… - 2021 ACM/IEEE 48th …, 2021 - ieeexplore.ieee.org
Recent work demonstrated the promise of using resistive random access memory (ReRAM)
as an emerging technology to perform inherently parallel analog domain in-situ matrix …

Look-up-table based processing-in-memory architecture with programmable precision-scaling for deep learning applications

PR Sutradhar, S Bavikadi, M Connolly… - … on Parallel and …, 2021 - ieeexplore.ieee.org
Processing in memory (PIM) architecture, with its ability to perform ultra-low-latency parallel
processing, is regarded as a more suitable alternative to von Neumann computing …

Reconfigurable FET approximate computing-based accelerator for deep learning applications

R Saravanan, S Bavikadi, S Rai… - … on Circuits and …, 2023 - ieeexplore.ieee.org
Reconfigurable nanotechnologies such as Silicon Nanowire Field Effect Transistors (FETs)
serve as a promising technology that not only facilitates lower power consumption but also …

ClickTrain: Efficient and accurate end-to-end deep learning training via fine-grained architecture-preserving pruning

C Zhang, G Yuan, W Niu, J Tian, S Jin… - Proceedings of the …, 2021 - dl.acm.org
Convolutional neural networks (CNNs) are becoming increasingly deeper, wider, and non-
linear because of the growing demand on prediction accuracy and analysis quality. The …

Heterogeneous multi-functional look-up-table-based processing-in-memory architecture for deep learning acceleration

S Bavikadi, PR Sutradhar, A Ganguly… - … on Quality Electronic …, 2023 - ieeexplore.ieee.org
Emerging applications including deep neural networks (DNNs) and convolutional neural
networks (CNNs) employ massive amounts of data to perform computations and data …

Reconfigurable Processing-in-Memory Architecture for Data Intensive Applications

S Bavikadi, PR Sutradhar, A Ganguly… - … Conference on VLSI …, 2024 - ieeexplore.ieee.org
Emerging applications reliant on deep neural networks (DNNs) and convolutional neural
networks (CNNs) demand substantial data for computation and analysis. Deploying DNNs …

Automatic mapping of the best-suited DNN pruning schemes for real-time mobile acceleration

Y Gong, G Yuan, Z Zhan, W Niu, Z Li, P Zhao… - ACM Transactions on …, 2022 - dl.acm.org
Weight pruning is an effective model compression technique to tackle the challenges of
achieving real-time deep neural network (DNN) inference on mobile devices. However, prior …

System and Design Technology Co-Optimization of SOT-MRAM for High-Performance AI Accelerator Memory System

K Mishty, M Sadi - … Transactions on Computer-Aided Design of …, 2023 - ieeexplore.ieee.org
System on chips (SoCs) are now designed with their own artificial intelligence (AI)
accelerator segment to accommodate the ever-increasing demand of deep learning (DL) …

Towards Efficient Deep Neural Network Inference and Training for Ubiquitous AI

G Yuan - 2023 - search.proquest.com
Abstract Machine learning has become increasingly popular in recent years. Due to the high
accuracy and excellent scalability, deep neural networks have emerged as a fundamental …