FINE Samples for Learning with Noisy Labels. T Kim*, J Ko*, S Cho, JH Choi, SY Yun. Advances in Neural Information Processing Systems 34, 24137-24149, 2021. | 93 | 2021 |
CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition. S Ahn*, J Ko*, SY Yun. International Conference on Learning Representations 11, 2023. | 24 | 2023 |
Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding. S Bae*, J Ko*, H Song, SY Yun. arXiv preprint arXiv:2310.05424, 2023. | 22 | 2023 |
Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study. KY Son*, J Ko*, E Kim, SY Lee, MJ Kim, J Han, E Shin, TY Chung, DH Lim. Ophthalmology Science 2 (2), 100147, 2022. | 22 | 2022 |
Self-Contrastive Learning. S Bae*, S Kim*, J Ko, G Lee, S Noh, SY Yun. arXiv preprint arXiv:2106.15499, 2021. | 7* | 2021 |
DistiLLM: Towards Streamlined Distillation for Large Language Models. J Ko, S Kim, T Chen, SY Yun. arXiv preprint arXiv:2402.03898, 2024. | 6 | 2024 |
Deep Gaussian Process Models for Integrating Multifidelity Experiments with Nonstationary Relationships. J Ko, H Kim. IISE Transactions 54 (7), 686-698, 2022. | 5 | 2022 |
A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise. J Ko*, B Yi*, SY Yun. Proceedings of the AAAI Conference on Artificial Intelligence 37 (7), 8325-8333, 2023. | 4* | 2023 |
Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective. J Ko, S Park, M Jeong, S Hong, E Ahn, DS Chang, SY Yun. arXiv preprint arXiv:2302.01530, 2023. | 3 | 2023 |
Fine-Tuning Pre-trained Models for Robustness under Noisy Labels. S Ahn, S Kim, J Ko, SY Yun. arXiv preprint arXiv:2310.17668, 2023. | 2 | 2023 |
NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models. J Ko*, S Park*, Y Kim, S Ahn, DS Chang, E Ahn, SY Yun. arXiv preprint arXiv:2310.10054, 2023. | 2 | 2023 |
Synergy with Translation Artifacts for Training and Inference in Multilingual Tasks. J Oh*, J Ko*, SY Yun. Empirical Methods in Natural Language Processing 2022, 6747-6754, 2022. | 2 | 2022 |
Efficient Utilization of Pre-trained Model for Learning with Noisy Labels. J Ko*, S Ahn*, SY Yun. ICLR 2023 Workshop on Pitfalls of Limited Data and Computation for …. | 1* | 2023 |
Prune Efficiently: Improving Cost-Efficiency of Structured Pruning. J Ko, Y Kim, S Park, SY Yun. Proceedings of the Korean Institute of Information Scientists and Engineers (KIISE) Conference, 464-466, 2023. | | 2023 |
Improving Adaptability and Generalizability of Efficient Transfer Learning for Vision-Language Models. Y Yang*, J Ko*, SY Yun. arXiv preprint arXiv:2311.15569, 2023. | | 2023 |
Improving Generalization in Reinforcement Learning via Distribution-Aware Batch Normalization. J Ko, S Kim, J Kim, S Park, S Bae, SY Yun. Proceedings of the Korean Institute of Information Scientists and Engineers (KIISE) Conference, 795-797, 2022. | | 2022 |
Client Sampling Algorithm in Federated Learning via Combinatorial Averaging and Multi-Armed Bandits. S Bae, T Kim, S Ahn, S Kim, J Ko, SY Yun. Proceedings of the Korean Institute of Information Scientists and Engineers (KIISE) Conference, 1088-1090, 2022. | | 2022 |