Kang Min Yoo
NAVER Hyperscale AI & AI Lab
Verified email at navercorp.com
Title · Cited by · Year
Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts
Y Kim, HJ Kim, C Park, C Park, H Cho, J Kim, KM Yoo, S Lee, T Kim
arXiv preprint arXiv:2408.01084, 2024
2024
LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices
JH Lee, J Kim, JY Yang, SJ Kwon, E Yang, KM Yoo, D Lee
arXiv preprint arXiv:2407.11534, 2024
2024
Token-Supervised Value Models for Enhancing Mathematical Reasoning Capabilities of Large Language Models
JH Lee, JY Yang, B Heo, D Han, KM Yoo
arXiv preprint arXiv:2407.12863, 2024
2024
Investigating the Influence of Prompt-Specific Shortcuts in AI Generated Text Detection
C Park, HJ Kim, J Kim, Y Kim, T Kim, H Cho, H Jo, S Lee, KM Yoo
arXiv preprint arXiv:2406.16275, 2024
2024
HyperCLOVA X Technical Report
KM Yoo, J Han, S In, H Jeon, J Jeong, J Kang, H Kim, KM Kim, M Kim, ...
arXiv preprint arXiv:2404.01954, 2024
Cited by 1 · 2024
KMMLU: Measuring massive multitask language understanding in Korean
G Son, H Lee, S Kim, S Kim, N Muennighoff, T Choi, C Park, KM Yoo, ...
arXiv preprint arXiv:2402.11548, 2024
Cited by 14 · 2024
Aligning Large Language Models by On-Policy Self-Judgment
S Lee, S Kim, A Yousefpour, M Seo, KM Yoo, Y Yu
arXiv preprint arXiv:2402.11253, 2024
Cited by 1 · 2024
Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization
J Kim, JH Lee, S Kim, J Park, KM Yoo, SJ Kwon, D Lee
Advances in Neural Information Processing Systems 36, 2024
Cited by 54 · 2024
Unified Speech-Text Pretraining for Spoken Dialog Modeling
H Kim, S Seo, K Jeong, O Kwon, J Kim, J Lee, E Song, M Oh, S Yoon, ...
arXiv preprint arXiv:2402.05706, 2024
Cited by 4 · 2024
On the Analysis of Cross-Lingual Prompt Tuning for Decoder-based Multilingual Model
N Park, J Park, KM Yoo, S Yoon
arXiv preprint arXiv:2311.07820, 2023
Cited by 3 · 2023
Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
HJ Kim, H Cho, SW Lee, J Kim, C Park, S Lee, KM Yoo, T Kim
arXiv preprint arXiv:2310.14849, 2023
Cited by 1 · 2023
Instruction tuning with human curriculum
BW Lee, H Cho, KM Yoo
arXiv preprint arXiv:2310.09518, 2023
Cited by 6 · 2023
HyperT5: Towards Compute-Efficient Korean Language Modeling
D Park, S Ka, KM Yoo, G Lee, J Kang
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
2023
Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners
H Cho, HJ Kim, J Kim, SW Lee, S Lee, KM Yoo, T Kim
Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 12709 …, 2023
Cited by 18 · 2023
Aligning large language models through synthetic feedback
S Kim, S Bae, J Shin, S Kang, D Kwak, KM Yoo, M Seo
arXiv preprint arXiv:2305.13735, 2023
Cited by 40 · 2023
Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning
H Cho, C Park, J Kim, HJ Kim, KM Yoo, S Lee
arXiv preprint arXiv:2301.11660, 2023
Cited by 2 · 2023
Critic-guided decoding for controlled text generation
M Kim, H Lee, KM Yoo, J Park, H Lee, K Jung
arXiv preprint arXiv:2212.10938, 2022
Cited by 20 · 2022
Mutual information divergence: A unified metric for multimodal generative models
JH Kim, Y Kim, J Lee, KM Yoo, SW Lee
Advances in Neural Information Processing Systems 35, 35072-35086, 2022
Cited by 20 · 2022
Enhancing out-of-distribution detection in natural language understanding via implicit layer ensemble
H Cho, C Park, J Kang, KM Yoo, T Kim, S Lee
arXiv preprint arXiv:2210.11034, 2022
Cited by 6 · 2022
Alphatuning: Quantization-aware parameter-efficient adaptation of large-scale pre-trained language models
SJ Kwon, J Kim, J Bae, KM Yoo, JH Kim, B Park, B Kim, JW Ha, N Sung, ...
arXiv preprint arXiv:2210.03858, 2022
Cited by 29 · 2022
Articles 1–20