Seungkyu Choi
Verified email at kaist.ac.kr
Title · Cited by · Year
An Energy-Efficient Deep Convolutional Neural Network Training Accelerator for In Situ Personalization on Smart Devices
S Choi, J Sim, M Kang, Y Choi, H Kim, LS Kim
IEEE Journal of Solid-State Circuits 55 (10), 2691-2702, 2020
Cited by 41 · 2020
Energy-efficient design of processing element for convolutional neural network
Y Choi, D Bae, J Sim, S Choi, M Kim, LS Kim
IEEE Transactions on Circuits and Systems II: Express Briefs 64 (11), 1332-1336, 2017
Cited by 34 · 2017
An optimized design technique of low-bit neural network training for personalization on IoT devices
S Choi, J Shin, Y Choi, LS Kim
Proceedings of the 56th Annual Design Automation Conference 2019, 1-6, 2019
Cited by 24 · 2019
TrainWare: A memory optimized weight update architecture for on-device convolutional neural network training
S Choi, J Sim, M Kang, LS Kim
Proceedings of the International Symposium on Low Power Electronics and …, 2018
Cited by 24 · 2018
A pragmatic approach to on-device incremental learning system with selective weight updates
J Shin, S Choi, Y Choi, LS Kim
2020 57th ACM/IEEE Design Automation Conference (DAC), 1-6, 2020
Cited by 11 · 2020
A 47.4 µJ/epoch trainable deep convolutional neural network accelerator for in-situ personalization on smart devices
S Choi, J Sim, M Kang, Y Choi, H Kim, LS Kim
2019 IEEE Asian Solid-State Circuits Conference (A-SSCC), 57-60, 2019
Cited by 7 · 2019
A deep neural network training architecture with inference-aware heterogeneous data-type
S Choi, J Shin, LS Kim
IEEE Transactions on Computers 71 (5), 1216-1229, 2021
Cited by 5 · 2021
Compressing sparse ternary weight convolutional neural networks for efficient hardware acceleration
H Wi, H Kim, S Choi, LS Kim
2019 IEEE/ACM International Symposium on Low Power Electronics and Design …, 2019
Cited by 5 · 2019
SENIN: An energy-efficient sparse neuromorphic system with on-chip learning
MH Choi, S Choi, J Sim, LS Kim
2017 IEEE/ACM International Symposium on Low Power Electronics and Design …, 2017
Cited by 4 · 2017
A convergence monitoring method for DNN training of on-device task adaptation
S Choi, J Shin, LS Kim
2021 IEEE/ACM International Conference On Computer Aided Design (ICCAD), 1-9, 2021
Cited by 3 · 2021
Algorithm/architecture co-design for energy-efficient acceleration of multi-task DNN
J Shin, S Choi, J Ra, LS Kim
Proceedings of the 59th ACM/IEEE Design Automation Conference, 253-258, 2022
Cited by 2 · 2022
Method and apparatus with incremental learning model
K Donghyuk, KIM Leesup, S Jaekang, C SeungKyu
US Patent App. 17/089,764, 2021
Cited by 2 · 2021
Accelerating On-Device DNN Training Workloads via Runtime Convergence Monitor
S Choi, J Shin, LS Kim
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2022
Cited by 1 · 2022
Energy-efficient CNN Personalized training by adaptive data reformation
Y Jung, H Kim, S Choi, J Shin, LS Kim
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2022
Cited by 1 · 2022
Apparatus and method with multi-task processing
JW Jang, S Jaekang, LS Kim, C SeungKyu
US Patent App. 17/903,969, 2023
2023
Method and apparatus with neural network compression
JW Jang, S Jaekang, LS Kim, C SeungKyu
US Patent App. 17/892,481, 2023
2023
Method and device for encoding
C Yeongjae, C SeungKyu, LS Kim, S Jaekang
US Patent App. 17/401,453, 2022
2022
Method and apparatus with neural network data quantizing
C SeungKyu, HA Sangwon, LS Kim, S Jaekang
US Patent App. 15/931,362, 2021
2021
Rare Computing: Removing Redundant Multiplications From Sparse and Repetitive Data in Deep Neural Networks
K Park, S Choi, Y Choi, LS Kim
IEEE Transactions on Computers 71 (4), 795-808, 2021
2021
Articles 1–19