Towards Continual Knowledge Learning of Language Models. J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo. ICLR 2022. Cited by 103.
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models. J Jang, S Ye, C Lee, S Yang, J Shin, J Han, G Kim, M Seo. EMNLP 2022. Cited by 60.
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts. J Jang, S Ye, M Seo. Transfer Learning for NLP Workshop @ NeurIPS 2022. Cited by 56.
In-Context Instruction Learning. S Ye, H Hwang, S Yang, H Yun, Y Kim, M Seo. AAAI 2024. Cited by 44*.
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning. S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo. EMNLP 2023. Cited by 41.
Exploring the Benefits of Training Expert Language Models over Instruction Tuning. J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo. ICML 2023. Cited by 39.
Dimensional Emotion Detection from Categorical Emotion. S Park, J Kim, S Ye, J Jeon, HY Park, A Oh. EMNLP 2021. Cited by 37.
FLASK: Fine-grained Language Model Evaluation Based on Alignment Skill Sets. S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo. ICLR 2024. Cited by 36.
SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation. S Ye, Y Jo, D Kim, S Kim, H Hwang, M Seo. Blog post, 2023. Cited by 32.
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners. S Ye, D Kim, J Jang, J Shin, M Seo. ICLR 2023. Cited by 29*.
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning. S Ye, J Kim, A Oh. EMNLP 2021. Cited by 16.
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt. S Ye, J Jang, D Kim, Y Jo, M Seo. EMNLP 2023 Findings. Cited by 12*.
Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards. H Hwang, D Kim, S Kim, S Ye, M Seo. arXiv preprint arXiv:2404.10346, 2024. Cited by 2.
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models. Y Kim, J Yoon, S Ye, SJ Hwang, S Yun. NAACL 2024. Cited by 2.
Improving Probability-based Prompt Selection through Unified Evaluation and Analysis. S Yang, J Kim, J Jang, S Ye, H Lee, M Seo. TACL 2024. Cited by 2.
How Do Large Language Models Acquire Factual Knowledge During Pretraining? H Chang, J Park, S Ye, S Yang, Y Seo, DS Chang, M Seo. arXiv preprint arXiv:2406.11813, 2024. Cited by 1.
Instruction Matters: A Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks. C Lee, J Han, S Ye, SJ Choi, H Lee, K Bae. arXiv preprint arXiv:2404.16418, 2024. Cited by 1.
INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models. H Oh, H Lee, S Ye, H Shin, H Jang, C Jun, M Seo. arXiv preprint arXiv:2402.14334, 2024. Cited by 1.