Approximate computing: Concepts, architectures, challenges, applications, and future directions

AM Dalloo, AJ Humaidi, AK Al Mhdawi… - IEEE …, 2024 - ieeexplore.ieee.org
The unprecedented progress in computational technologies led to a substantial proliferation
of artificial intelligence applications, notably in the era of big data and IoT devices. In the …

Efficient deployment of transformer models on edge TPU accelerators: A real system evaluation

BC Reidy, M Mohammadi, ME Elbtity… - Architecture and System …, 2023 - openreview.net
Transformer models have become a dominant architecture in the world of machine learning.
From natural language processing to more recent computer vision applications …

OpenSpike: An OpenRAM SNN accelerator

F Modaresi, M Guthaus… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
This paper presents a spiking neural network (SNN) accelerator made using fully open-
source EDA tools, process design kit (PDK), and memory macros synthesized using Open …

Flex-TPU: A flexible TPU with runtime reconfigurable dataflow architecture

M Elbtity, P Chandarana, R Zand - arXiv preprint arXiv:2407.08700, 2024 - arxiv.org
Tensor processing units (TPUs) are one of the most well-known machine learning (ML)
accelerators utilized at large scale in data centers as well as in tiny ML applications. TPUs …

Intelligence processing units accelerate neuromorphic learning

PSV Sun, A Titterton, A Gopiani, T Santos… - arXiv preprint arXiv …, 2022 - arxiv.org
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms
of energy consumption and latency when performing inference with deep learning …

Heterogeneous integration of in-memory analog computing architectures with tensor processing units

ME Elbtity, B Reidy, MH Amin, R Zand - Proceedings of the Great Lakes …, 2023 - dl.acm.org
Tensor processing units (TPUs), specialized hardware accelerators for machine learning
tasks, have shown significant performance improvements when executing convolutional …

Exploiting deep learning accelerators for neuromorphic workloads

PSV Sun, A Titterton, A Gopiani, T Santos… - Neuromorphic …, 2024 - iopscience.iop.org
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms
of energy consumption and latency when performing inference with deep learning …

A certain examination on heterogeneous systolic array (HSA) design for deep learning accelerations with low power computations

DKJ Rajanediran, CG Babu, K Priyadharsini - … Computing: Informatics and …, 2024 - Elsevier
Acceleration techniques play a crucial role in enhancing the performance of modern high-
speed computations, especially in Deep Learning (DL) applications where the speed is of …

Energy-efficient deployment of machine learning workloads on neuromorphic hardware

P Chandarana, M Mohammadi… - 2022 IEEE 13th …, 2022 - ieeexplore.ieee.org
As the technology industry is moving towards implementing tasks such as natural language
processing, path planning, image classification, and more on smaller edge computing …

STRIVE: Empowering a Low Power Tensor Processing Unit with Fault Detection and Error Resilience

ND Gundi, S Roy, K Chakraborty - ACM Transactions on Design …, 2025 - dl.acm.org
Rapid growth in Deep Neural Network (DNN) workloads has increased the energy footprint
of the Artificial Intelligence (AI) computing realm. For optimum energy efficiency, we propose …