Seonghyeon Ye
Verified email at kaist.ac.kr - Homepage
Title
Cited by
Year
Towards Continual Knowledge Learning of Language Models
J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo
ICLR 2022, 2022
Cited by 103 · 2022
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
J Jang, S Ye, C Lee, S Yang, J Shin, J Han, G Kim, M Seo
EMNLP 2022, 2022
Cited by 60 · 2022
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
J Jang, S Ye, M Seo
Transfer Learning for NLP Workshop @ NeurIPS 2022, 2022
Cited by 56 · 2022
In-Context Instruction Learning
S Ye, H Hwang, S Yang, H Yun, Y Kim, M Seo
AAAI 2024, 2024
Cited by 44* · 2024
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
EMNLP 2023, 2023
Cited by 41 · 2023
Exploring the Benefits of Training Expert Language Models over Instruction Tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
ICML 2023, 2023
Cited by 39 · 2023
Dimensional Emotion Detection from Categorical Emotion
S Park, J Kim, S Ye, J Jeon, HY Park, A Oh
EMNLP 2021, 2021
Cited by 37 · 2021
FLASK: Fine-grained Language Model Evaluation Based on Alignment Skill Sets
S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
ICLR 2024, 2024
Cited by 36 · 2024
SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation
S Ye, Y Jo, D Kim, S Kim, H Hwang, M Seo
Blog post, 2023
Cited by 32 · 2023
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
S Ye, D Kim, J Jang, J Shin, M Seo
ICLR 2023, 2023
Cited by 29* · 2023
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning
S Ye, J Kim, A Oh
EMNLP 2021, 2021
Cited by 16 · 2021
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
S Ye, J Jang, D Kim, Y Jo, M Seo
EMNLP 2023 Findings, 2023
Cited by 12* · 2023
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
H Hwang, D Kim, S Kim, S Ye, M Seo
arXiv preprint arXiv:2404.10346, 2024
Cited by 2 · 2024
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Y Kim, J Yoon, S Ye, SJ Hwang, S Yun
NAACL 2024, 2024
Cited by 2 · 2024
Improving Probability-Based Prompt Selection Through Unified Evaluation and Analysis
S Yang, J Kim, J Jang, S Ye, H Lee, M Seo
TACL 2024, 2024
Cited by 2 · 2024
How Do Large Language Models Acquire Factual Knowledge During Pretraining?
H Chang, J Park, S Ye, S Yang, Y Seo, DS Chang, M Seo
arXiv preprint arXiv:2406.11813, 2024
Cited by 1 · 2024
Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks
C Lee, J Han, S Ye, SJ Choi, H Lee, K Bae
arXiv preprint arXiv:2404.16418, 2024
Cited by 1 · 2024
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models
H Oh, H Lee, S Ye, H Shin, H Jang, C Jun, M Seo
arXiv preprint arXiv:2402.14334, 2024
Cited by 1 · 2024
Articles 1–18