Kang Min Yoo
NAVER Hyperscale AI & AI Lab
Verified email at navercorp.com
Title
Cited by
Year
Adaptive Contrastive Decoding in Retrieval-Augmented Generation for Handling Noisy Contexts
Y Kim, HJ Kim, C Park, C Park, H Cho, J Kim, KM Yoo, S Lee, T Kim
arXiv preprint arXiv:2408.01084, 2024
2024
Aligning Large Language Models by On-Policy Self-Judgment
S Lee, S Kim, A Yousefpour, M Seo, KM Yoo, Y Yu
arXiv preprint arXiv:2402.11253, 2024
Cited by 1 · 2024
Aligning large language models through synthetic feedback
S Kim, S Bae, J Shin, S Kang, D Kwak, KM Yoo, M Seo
arXiv preprint arXiv:2305.13735, 2023
Cited by 40 · 2023
Alphatuning: Quantization-aware parameter-efficient adaptation of large-scale pre-trained language models
SJ Kwon, J Kim, J Bae, KM Yoo, JH Kim, B Park, B Kim, JW Ha, N Sung, ...
arXiv preprint arXiv:2210.03858, 2022
Cited by 28 · 2022
Analysis on Correlation between Prescriptions and Test Results of Diabetes Patients using Graph Models and Node Centrality
KM Yoo, S Park, S Rhee, KS Yu, S Lee
KIISE Transactions on Computing Practices 21 (7), 482-487, 2015
2015
Attribute injection for pretrained language models: A new benchmark and an efficient method
RK Amplayo, KM Yoo, SW Lee
Proceedings of the 29th International Conference on Computational …, 2022
Cited by 7 · 2022
Continuous decomposition of granularity for neural paraphrase generation
X Gu, Z Zhang, SW Lee, KM Yoo, JW Ha
arXiv preprint arXiv:2209.01765, 2022
Cited by 4 · 2022
Critic-guided decoding for controlled text generation
M Kim, H Lee, KM Yoo, J Park, H Lee, K Jung
arXiv preprint arXiv:2212.10938, 2022
Cited by 18 · 2022
Data augmentation for spoken language understanding via joint variational generation
KM Yoo, Y Shin, S Lee
Proceedings of the AAAI conference on artificial intelligence 33 (01), 7402-7409, 2019
Cited by 85 · 2019
Deep Generative Data Augmentation for Natural Language Processing
KM Yoo
Seoul National University Graduate School, 2020
Cited by 1 · 2020
Dialogbert: Discourse-aware response generation via learning to recover and rank utterances
X Gu, KM Yoo, JW Ha
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12911 …, 2021
Cited by 78 · 2021
Don't Just Scratch the Surface: Enhancing Word Representations for Korean with Hanja
KM Yoo, T Kim, S Lee
arXiv preprint arXiv:1908.09282, 2019
Cited by 2 · 2019
Enhancing out-of-distribution detection in natural language understanding via implicit layer ensemble
H Cho, C Park, J Kang, KM Yoo, T Kim, S Lee
arXiv preprint arXiv:2210.11034, 2022
Cited by 6 · 2022
Generating information-seeking conversations from unlabeled documents
G Kim, S Kim, KM Yoo, J Kang
arXiv preprint arXiv:2205.12609, 2022
Cited by 14* · 2022
Gpt3mix: Leveraging large-scale language models for text augmentation
KM Yoo, D Park, J Kang, SW Lee, W Park
arXiv preprint arXiv:2104.08826, 2021
Cited by 204 · 2021
Ground-truth labels matter: A deeper look into input-label demonstrations
KM Yoo, J Kim, HJ Kim, H Cho, H Jo, SW Lee, S Lee, T Kim
arXiv preprint arXiv:2205.12685, 2022
Cited by 74 · 2022
HyperCLOVA X Technical Report
KM Yoo, J Han, S In, H Jeon, J Jeong, J Kang, H Kim, KM Kim, M Kim, ...
arXiv preprint arXiv:2404.01954, 2024
Cited by 1 · 2024
HyperT5: Towards Compute-Efficient Korean Language Modeling
D Park, S Ka, KM Yoo, G Lee, J Kang
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
2023
Improving visually grounded sentence representations with self-attention
KM Yoo, Y Shin, S Lee
arXiv preprint arXiv:1712.00609, 2017
Cited by 7 · 2017
Instruction tuning with human curriculum
BW Lee, H Cho, KM Yoo
arXiv preprint arXiv:2310.09518, 2023
Cited by 5 · 2023
Articles 1–20