NAND-Net: Minimizing computational complexity of in-memory processing for binary neural networks. H. Kim, J. Sim, Y. Choi, L.-S. Kim. 2019 IEEE International Symposium on High Performance Computer Architecture …, 2019. Cited by 45.
An energy-efficient deep convolutional neural network training accelerator for in situ personalization on smart devices. S. Choi, J. Sim, M. Kang, Y. Choi, H. Kim, L.-S. Kim. IEEE Journal of Solid-State Circuits 55 (10), 2691-2702, 2020. Cited by 42.
A kernel decomposition architecture for binary-weight convolutional neural networks. H. Kim, J. Sim, Y. Choi, L.-S. Kim. Proceedings of the 54th Annual Design Automation Conference 2017, 1-6, 2017. Cited by 39.
S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity. M. Kang, H. Kim, H. Shin, J. Sim, K. Kim, L.-S. Kim. IEEE Transactions on Computers 71 (6), 1291-1304, 2021. Cited by 18.
A 47.4 µJ/epoch trainable deep convolutional neural network accelerator for in-situ personalization on smart devices. S. Choi, J. Sim, M. Kang, Y. Choi, H. Kim, L.-S. Kim. 2019 IEEE Asian Solid-State Circuits Conference (A-SSCC), 57-60, 2019. Cited by 7.
Compressing sparse ternary weight convolutional neural networks for efficient hardware acceleration. H. Wi, H. Kim, S. Choi, L.-S. Kim. 2019 IEEE/ACM International Symposium on Low Power Electronics and Design …, 2019. Cited by 5.
ADC-free ReRAM-based in-situ accelerator for energy-efficient binary neural networks. H. Kim, Y. Jung, L.-S. Kim. IEEE Transactions on Computers 73 (2), 353-365, 2022. Cited by 4.
Quantization-error-robust deep neural network for embedded accelerators. Y. Jung, H. Kim, Y. Choi, L.-S. Kim. IEEE Transactions on Circuits and Systems II: Express Briefs 69 (2), 609-613, 2021. Cited by 4.
Energy-efficient CNN personalized training by adaptive data reformation. Y. Jung, H. Kim, S. Choi, J. Shin, L.-S. Kim. IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2022. Cited by 2.