BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding J Devlin, MW Chang, K Lee, K Toutanova arXiv preprint arXiv:1810.04805, 2018 | 105765 | 2018 |
Deep contextualized word representations ME Peters, M Neumann, M Iyyer, M Gardner, C Clark, K Lee, ... NAACL-HLT, Association for Computational Linguistics, 2227-2237, 2018 | 15405* | 2018 |
Natural questions: a benchmark for question answering research T Kwiatkowski, J Palomaki, O Redfield, M Collins, A Parikh, C Alberti, ... Transactions of the Association for Computational Linguistics 7, 453-466, 2019 | 2360 | 2019 |
REALM: Retrieval-Augmented Language Model Pre-Training K Guu, K Lee, Z Tung, P Pasupat, MW Chang arXiv preprint arXiv:2002.08909, 2020 | 1540* | 2020 |
End-to-end Neural Coreference Resolution K Lee, L He, M Lewis, L Zettlemoyer EMNLP, 2017 | 1109 | 2017 |
Latent Retrieval for Weakly Supervised Open Domain Question Answering K Lee, MW Chang, K Toutanova arXiv preprint arXiv:1906.00300, 2019 | 927 | 2019 |
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions C Clark, K Lee, MW Chang, T Kwiatkowski, M Collins, K Toutanova arXiv preprint arXiv:1905.10044, 2019 | 883 | 2019 |
Well-read students learn better: On the importance of pre-training compact models I Turc, MW Chang, K Lee, K Toutanova arXiv preprint arXiv:1908.08962, 2019 | 849* | 2019 |
Legibility and predictability of robot motion AD Dragan, KCT Lee, SS Srinivasa 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI …, 2013 | 826 | 2013 |
Higher-order Coreference Resolution with Coarse-to-fine Inference K Lee, L He, L Zettlemoyer arXiv preprint arXiv:1804.05392, 2018 | 571 | 2018 |
Deep semantic role labeling: What works and what’s next L He, K Lee, M Lewis, L Zettlemoyer Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017 | 565 | 2017 |
Zero-Shot Entity Linking by Reading Entity Descriptions L Logeswaran, MW Chang, K Lee, K Toutanova, J Devlin, H Lee arXiv preprint arXiv:1906.07348, 2019 | 268 | 2019 |
Jointly predicting predicates and arguments in neural semantic role labeling L He, K Lee, O Levy, L Zettlemoyer arXiv preprint arXiv:1805.04787, 2018 | 218 | 2018 |
Broad-coverage CCG Semantic Parsing with AMR Y Artzi, K Lee, L Zettlemoyer Proceedings of the 2015 Conference on Empirical Methods in Natural Language …, 2015 | 193 | 2015 |
Learning Recurrent Span Representations for Extractive Question Answering K Lee, S Salant, T Kwiatkowski, A Parikh, D Das, J Berant arXiv preprint arXiv:1611.01436, 2016 | 164 | 2016 |
Pix2struct: Screenshot parsing as pretraining for visual language understanding K Lee, M Joshi, IR Turc, H Hu, F Liu, JM Eisenschlos, U Khandelwal, ... International Conference on Machine Learning, 18893-18912, 2023 | 138 | 2023 |
A BERT Baseline for the Natural Questions C Alberti, K Lee, M Collins arXiv preprint arXiv:1901.08634, 2019 | 136 | 2019 |
XOR QA: Cross-lingual Open-Retrieval Question Answering A Asai, J Kasai, JH Clark, K Lee, E Choi, H Hajishirzi arXiv preprint arXiv:2010.11856, 2020 | 120 | 2020 |
Syntactic Scaffolds for Semantic Structures S Swayamdipta, S Thomson, K Lee, L Zettlemoyer, C Dyer, NA Smith arXiv preprint arXiv:1808.10485, 2018 | 118 | 2018 |
Context-dependent semantic parsing for time expressions K Lee, Y Artzi, J Dodge, L Zettlemoyer Proceedings of the Conference of the Association for Computational …, 2014 | 111 | 2014 |