Kang Min Yoo
NAVER Hyperscale AI & AI Lab
Verified email at navercorp.com
Title · Cited by · Year
Learning to compose task-specific tree structures
J Choi, KM Yoo, S Lee
Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018
222* · 2018
Self-guided contrastive learning for BERT sentence representations
T Kim, KM Yoo, S Lee
arXiv preprint arXiv:2106.07345, 2021
185 · 2021
TaleBrush: Sketching stories with generative pretrained language models
JJY Chung, W Kim, KM Yoo, H Lee, E Adar, M Chang
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems …, 2022
177 · 2022
Gpt3mix: Leveraging large-scale language models for text augmentation
KM Yoo, D Park, J Kang, SW Lee, W Park
arXiv preprint arXiv:2104.08826, 2021
172 · 2021
What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers
B Kim, HS Kim, SW Lee, G Lee, D Kwak, DH Jeon, S Park, S Kim, S Kim, ...
arXiv preprint arXiv:2109.04650, 2021
96 · 2021
Data augmentation for spoken language understanding via joint variational generation
KM Yoo, Y Shin, S Lee
Proceedings of the AAAI conference on artificial intelligence 33 (01), 7402-7409, 2019
84 · 2019
Dialogbert: Discourse-aware response generation via learning to recover and rank utterances
X Gu, KM Yoo, JW Ha
Proceedings of the AAAI Conference on Artificial Intelligence 35 (14), 12911 …, 2021
76 · 2021
Ground-truth labels matter: A deeper look into input-label demonstrations
KM Yoo, J Kim, HJ Kim, H Cho, H Jo, SW Lee, S Lee, T Kim
arXiv preprint arXiv:2205.12685, 2022
67* · 2022
Memory-efficient fine-tuning of compressed large language models via sub-4-bit integer quantization
J Kim, JH Lee, S Kim, J Park, KM Yoo, SJ Kwon, D Lee
Advances in Neural Information Processing Systems 36, 2024
42 · 2024
Self-generated in-context learning: Leveraging auto-regressive language models as a demonstration generator
HJ Kim, H Cho, J Kim, T Kim, KM Yoo, S Lee
arXiv preprint arXiv:2206.08082, 2022
36 · 2022
Aligning large language models through synthetic feedback
S Kim, S Bae, J Shin, S Kang, D Kwak, KM Yoo, M Seo
arXiv preprint arXiv:2305.13735, 2023
29 · 2023
Alphatuning: Quantization-aware parameter-efficient adaptation of large-scale pre-trained language models
SJ Kwon, J Kim, J Bae, KM Yoo, JH Kim, B Park, B Kim, JW Ha, N Sung, ...
arXiv preprint arXiv:2210.03858, 2022
26 · 2022
Response generation with context-aware prompt learning
X Gu, KM Yoo, SW Lee
arXiv preprint arXiv:2111.02643, 2021
23 · 2021
Leveraging class hierarchy in fashion classification
H Cho, C Ahn, K Min Yoo, J Seol, S Lee
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
22 · 2019
Variational hierarchical dialog autoencoder for dialog state tracking data augmentation
KM Yoo, H Lee, F Dernoncourt, T Bui, W Chang, S Lee
arXiv preprint arXiv:2001.08604, 2020
17 · 2020
Critic-guided decoding for controlled text generation
M Kim, H Lee, KM Yoo, J Park, H Lee, K Jung
arXiv preprint arXiv:2212.10938, 2022
16 · 2022
Mutual information divergence: A unified metric for multimodal generative models
JH Kim, Y Kim, J Lee, KM Yoo, SW Lee
Advances in Neural Information Processing Systems 35, 35072-35086, 2022
14 · 2022
Prompt-augmented linear probing: Scaling beyond the limit of few-shot in-context learners
H Cho, HJ Kim, J Kim, SW Lee, S Lee, KM Yoo, T Kim
Proceedings of the AAAI Conference on Artificial Intelligence 37 (11), 12709 …, 2023
13 · 2023
Generating information-seeking conversations from unlabeled documents
G Kim, S Kim, KM Yoo, J Kang
arXiv preprint arXiv:2205.12609, 2022
13* · 2022
Utterance generation with variational auto-encoder for slot filling in spoken language understanding
Y Shin, KM Yoo, SG Lee
IEEE Signal Processing Letters 26 (3), 505-509, 2019
12 · 2019
Articles 1–20