Hyeonuk Kim
Verified email at mvlsi.kaist.ac.kr
Title
Cited by
Year
NAND-Net: Minimizing computational complexity of in-memory processing for binary neural networks
H Kim, J Sim, Y Choi, LS Kim
2019 IEEE International Symposium on High Performance Computer Architecture …, 2019
Cited by 45 · 2019
An Energy-Efficient Deep Convolutional Neural Network Training Accelerator for In Situ Personalization on Smart Devices
S Choi, J Sim, M Kang, Y Choi, H Kim, LS Kim
IEEE Journal of Solid-State Circuits 55 (10), 2691-2702, 2020
Cited by 42 · 2020
A kernel decomposition architecture for binary-weight convolutional neural networks
H Kim, J Sim, Y Choi, LS Kim
Proceedings of the 54th Annual Design Automation Conference 2017, 1-6, 2017
Cited by 39 · 2017
S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity
M Kang, H Kim, H Shin, J Sim, K Kim, LS Kim
IEEE Transactions on Computers 71 (6), 1291-1304, 2021
Cited by 18 · 2021
A 47.4 µJ/epoch trainable deep convolutional neural network accelerator for in-situ personalization on smart devices
S Choi, J Sim, M Kang, Y Choi, H Kim, LS Kim
2019 IEEE Asian Solid-State Circuits Conference (A-SSCC), 57-60, 2019
Cited by 7 · 2019
Compressing sparse ternary weight convolutional neural networks for efficient hardware acceleration
H Wi, H Kim, S Choi, LS Kim
2019 IEEE/ACM International Symposium on Low Power Electronics and Design …, 2019
Cited by 5 · 2019
ADC-free ReRAM-based in-situ accelerator for energy-efficient binary neural networks
H Kim, Y Jung, LS Kim
IEEE Transactions on Computers 73 (2), 353-365, 2022
Cited by 4 · 2022
Quantization-error-robust deep neural network for embedded accelerators
Y Jung, H Kim, Y Choi, LS Kim
IEEE Transactions on Circuits and Systems II: Express Briefs 69 (2), 609-613, 2021
Cited by 4 · 2021
Energy-efficient CNN personalized training by adaptive data reformation
Y Jung, H Kim, S Choi, J Shin, LS Kim
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2022
Cited by 2 · 2022
Articles 1–9