Memristive technologies for data storage, computation, encryption, and radio-frequency communication

M Lanza, A Sebastian, WD Lu, M Le Gallo, MF Chang… - Science, 2022 - science.org
Memristive devices, which combine a resistor with memory functions such that voltage
pulses can change their resistance (and hence their memory state) in a nonvolatile manner …
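
As a toy illustration of the pulse-programmed, nonvolatile resistance change this snippet describes, the sketch below models a single memristive cell whose conductance is nudged up or down by programming pulses and retained between reads. The linear update rule, conductance range, and step size are illustrative assumptions, not a device model from the paper.

```python
# Toy memristive cell: programming pulses shift its conductance between a low
# and a high state, and the state persists ("nonvolatile") until the next pulse.
# Parameter values below are assumptions chosen only for illustration.

class ToyMemristor:
    def __init__(self, g_min=1e-6, g_max=1e-4, g_init=1e-6):
        self.g_min, self.g_max = g_min, g_max
        self.g = g_init  # conductance in siemens; retained between operations

    def apply_pulse(self, voltage, step=5e-6):
        """Positive pulses raise the conductance, negative pulses lower it."""
        if voltage > 0:
            self.g = min(self.g + step, self.g_max)
        elif voltage < 0:
            self.g = max(self.g - step, self.g_min)
        return self.g

    def read(self, v_read=0.1):
        """A small read voltage returns a current without changing the state."""
        return self.g * v_read


cell = ToyMemristor()
for _ in range(10):
    cell.apply_pulse(+1.0)  # ten SET pulses
print(f"stored conductance: {cell.g:.2e} S, read current: {cell.read():.2e} A")
```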

A compute-in-memory chip based on resistive random-access memory

W Wan, R Kubendran, C Schaefer, SB Eryilmaz… - Nature, 2022 - nature.com
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge
devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory …
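
The sketch below shows, in idealized form, the in-place matrix-vector multiply that compute-in-memory arrays perform: inputs appear as word-line voltages, weights as stored conductances, and each bit line sums currents by Kirchhoff's current law. The array size, conductance range, and read-noise level are assumptions for illustration, not parameters of the reported chip.

```python
# Idealized analog matrix-vector multiply in a resistive crossbar.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # conductances (S), one device per crosspoint
v = rng.uniform(0.0, 0.2, size=128)          # input voltages on the word lines (V)

i_ideal = v @ G                              # bit-line currents: I_j = sum_i V_i * G_ij
i_meas = i_ideal + rng.normal(0, 0.01 * i_ideal.std(), size=i_ideal.shape)  # assumed read noise

print("relative error from noise:", np.linalg.norm(i_meas - i_ideal) / np.linalg.norm(i_ideal))
```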

The big chip: Challenge, model and architecture

Y Han, H Xu, M Lu, H Wang, J Huang, Y Wang… - Fundamental …, 2024 - Elsevier
As Moore's Law comes to an end, the implementation of high-performance chips
through transistor scaling has become increasingly challenging. To improve performance …

A survey on deep learning hardware accelerators for heterogeneous HPC platforms

C Silvano, D Ielmini, F Ferrandi, L Fiorin… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent trends in deep learning (DL) imposed hardware accelerators as the most viable
solution for several classes of high-performance computing (HPC) applications such as …

Mixed-signal computing for deep neural network inference

B Murmann - IEEE Transactions on Very Large Scale …, 2020 - ieeexplore.ieee.org
Modern deep neural networks (DNNs) require billions of multiply-accumulate operations per
inference. Given that these computations demand relatively low precision, it is feasible to …
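
A quick numerical check of the claim that DNN multiply-accumulates tolerate low precision: the sketch below compares an 8-bit quantized dot product against its floating-point reference. The symmetric per-tensor quantizer is a common textbook scheme used here as an assumption, not the mixed-signal circuit discussed in the paper.

```python
# 8-bit quantized dot product vs. floating-point reference.
import numpy as np

def quantize(x, n_bits=8):
    """Symmetric uniform quantization to signed n-bit integers plus a scale."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / q_max
    return np.clip(np.round(x / scale), -q_max, q_max).astype(np.int32), scale

rng = np.random.default_rng(1)
w, a = rng.standard_normal(4096), rng.standard_normal(4096)

wq, sw = quantize(w)
aq, sa = quantize(a)
y_int = int(np.dot(wq, aq)) * sw * sa   # integer MAC, rescaled once at the end
y_ref = float(np.dot(w, a))
print(f"float: {y_ref:.4f}  int8: {y_int:.4f}  rel. error: {abs(y_int - y_ref) / abs(y_ref):.2%}")
```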

PIMCA: A programmable in-memory computing accelerator for energy-efficient DNN inference

B Zhang, S Yin, M Kim, J Saikia, S Kwon… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
This article presents a programmable in-memory computing accelerator (PIMCA) for low-
precision (1–2 b) deep neural network (DNN) inference. The custom 10T1C bitcell in the in …
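
For the 1-bit end of the precision range mentioned here, a common trick is to replace the dot product with XNOR plus a population count, which is the kind of operation bitcell arrays can evaluate in place. The sketch below verifies that equivalence for ±1 vectors; the vector length and bit encoding are assumptions, and this is not the paper's 10T1C circuit.

```python
# Binary (±1) dot product via XNOR and popcount.
import numpy as np

rng = np.random.default_rng(2)
n = 256
w = rng.choice([-1, 1], size=n)
a = rng.choice([-1, 1], size=n)

# Encode +1 -> bit 1, -1 -> bit 0.
wb = (w > 0).astype(np.uint8)
ab = (a > 0).astype(np.uint8)

matches = np.count_nonzero(~(wb ^ ab) & 1)  # XNOR, then popcount of matching positions
dot_bitwise = 2 * matches - n               # map match count back to a ±1 dot product
print(dot_bitwise, int(np.dot(w, a)))       # the two values agree
```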

Filament Engineering of Two‐Dimensional h‐BN for a Self‐Power Mechano‐Nociceptor System

G Ding, RS Chen, P Xie, B Yang, G Shang, Y Liu… - Small, 2022 - Wiley Online Library
The switching variability caused by the intrinsic stochasticity of ionic/atomic motion during
conductive filament (CF) formation largely limits the applications of diffusive …
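
To make the variability concrete, the sketch below runs a minimal Monte Carlo experiment in which the low-resistance state after each SET event is drawn from a log-normal distribution, a common empirical choice for filamentary devices. The distribution and its parameters are assumptions for illustration, not data from this work.

```python
# Cycle-to-cycle variability of the post-SET resistance, modeled as log-normal.
import numpy as np

rng = np.random.default_rng(3)
n_cycles = 10_000
r_lrs = rng.lognormal(mean=np.log(5e3), sigma=0.3, size=n_cycles)  # post-SET resistance (ohm)

cv = r_lrs.std() / r_lrs.mean()  # coefficient of variation, a standard variability metric
print(f"median LRS: {np.median(r_lrs):.0f} ohm, cycle-to-cycle CV: {cv:.1%}")
```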

NN-Baton: DNN workload orchestration and chiplet granularity exploration for multichip accelerators

Z Tan, H Cai, R Dong, K Ma - 2021 ACM/IEEE 48th Annual …, 2021 - ieeexplore.ieee.org
The revolution of machine learning poses an unprecedented demand for computation
resources, urging more transistors on a single monolithic chip, which is not sustainable in …

CHIMERA: A 0.92-TOPS, 2.2-TOPS/W edge AI accelerator with 2-MByte on-chip foundry resistive RAM for efficient training and inference

K Prabhu, A Gural, ZF Khan… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
Implementing edge artificial intelligence (AI) inference and training is challenging with
current memory technologies. As deep neural networks (DNNs) grow in size, this problem is …

A 95.6-TOPS/W deep learning inference accelerator with per-vector scaled 4-bit quantization in 5 nm

B Keller, R Venkatesan, S Dai, SG Tell… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
The energy efficiency of deep neural network (DNN) inference can be improved with custom
accelerators. DNN inference accelerators often employ specialized hardware techniques to …
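
The sketch below illustrates the per-vector scaling idea named in the title: rather than one scale factor per tensor, each short vector of weights gets its own scale, which keeps 4-bit quantization error small when magnitudes vary across the tensor. The 16-element vector length and the simple max-based scale are assumptions, not the accelerator's exact scheme.

```python
# Per-vector scaled 4-bit weight quantization.
import numpy as np

def quantize_per_vector(w, n_bits=4, vec_len=16):
    q_max = 2 ** (n_bits - 1) - 1
    w = w.reshape(-1, vec_len)
    scales = np.abs(w).max(axis=1, keepdims=True) / q_max  # one scale per short vector
    q = np.clip(np.round(w / scales), -q_max, q_max)
    return (q * scales).ravel()                            # dequantize to measure error

rng = np.random.default_rng(4)
w = rng.standard_normal(4096) * rng.uniform(0.1, 3.0, size=4096)  # magnitudes vary across the tensor

w_pv = quantize_per_vector(w)
err = np.linalg.norm(w - w_pv) / np.linalg.norm(w)
print(f"relative 4-bit quantization error with per-vector scales: {err:.2%}")
```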