Language models are few-shot learners T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ... Advances in neural information processing systems 33, 1877-1901, 2020 | 27294 | 2020 |
Learning transferable visual models from natural language supervision A Radford, JW Kim, C Hallacy, A Ramesh, G Goh, S Agarwal, G Sastry, ... International conference on machine learning, 8748-8763, 2021 | 17715 | 2021 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, R Child, A Ramesh, DM Ziegler, J Wu, C Winter, C Hesse, M Chen, E Sigler, M Litwin, S Gray, B Chess, J Clark, C Berner, S McCandlish, A Radford, ... …, 2020 | 7141 | 2020 |
Evaluating large language models trained on code M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ... arXiv preprint arXiv:2107.03374, 2021 | 2269 | 2021 |
Gpt-4 technical report J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ... arXiv preprint arXiv:2303.08774, 2023 | 1669 | 2023 |
Webgpt: Browser-assisted question-answering with human feedback R Nakano, J Hilton, S Balaji, J Wu, L Ouyang, C Kim, C Hesse, S Jain, ... arXiv preprint arXiv:2112.09332, 2021 | 760 | 2021 |
Release strategies and the social impacts of language models I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ... arXiv preprint arXiv:1908.09203, 2019 | 435 | 2019 |
Toward trustworthy AI development: mechanisms for supporting verifiable claims M Brundage, S Avin, J Wang, H Belfield, G Krueger, G Hadfield, H Khlaaf, ... arXiv preprint arXiv:2004.07213, 2020 | 346 | 2020 |
Text and code embeddings by contrastive pre-training A Neelakantan, T Xu, R Puri, A Radford, JM Han, J Tworek, Q Yuan, ... arXiv preprint arXiv:2201.10005, 2022 | 280 | 2022 |
Language Models are Few-Shot Learners. 2020. doi: 10.48550 TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv, 5-7, 2020 | 164 | 2020 |
Language models are few-shot learners. arXiv TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... Computer Science, Computation and Language, 2020 | 156 | 2020 |
Evaluating clip: towards characterization of broader capabilities and downstream implications S Agarwal, G Krueger, J Clark, A Radford, JW Kim, M Brundage arXiv preprint arXiv:2108.02818, 2021 | 103 | 2021 |
Learning transferable visual models from natural language supervision. arXiv A Radford, JW Kim, C Hallacy, A Ramesh, G Goh, S Agarwal, G Sastry, ... arXiv preprint arXiv:2103.00020, 2021 | 87 | 2021 |
DALL·E: Creating images from text A Ramesh, M Pavlov, G Goh, S Gray, M Chen, R Child, V Misra, P Mishkin, ... OpenAI blog. https://openai.com/blog/dall-e, 2021 | 84 | 2021 |
Language models are few-shot learners. CoRR abs/2005.14165 (2020) TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... URL: https://arxiv.org/abs/2005.14165, 2020 | 74 | 2020 |
Language models are few-shot learners B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ... arXiv preprint arXiv:2005.14165, 2020 | 64 | 2020 |
Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ..., & Amodei, D. 2020 | 59 | 2020 |
Evaluating large language models trained on code. arXiv 2021 M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ... arXiv preprint arXiv:2107.03374, 2021 | 50 | 2021 |
Filling gaps in trustworthy development of AI S Avin, H Belfield, M Brundage, G Krueger, J Wang, A Weller, ... Science 374 (6573), 1327-1329, 2021 | 35 | 2021 |
A hazard analysis framework for code synthesis large language models H Khlaaf, P Mishkin, J Achiam, G Krueger, M Brundage arXiv preprint arXiv:2207.14157, 2022 | 18 | 2022 |