The Power of Scale for Parameter-Efficient Prompt Tuning. B Lester, R Al-Rfou, N Constant. arXiv preprint arXiv:2104.08691, 2021. Cited by 2815.
Universal Sentence Encoder. D Cer, Y Yang, S Kong, N Hua, N Limtiaco, RS John, N Constant, ... arXiv preprint arXiv:1803.11175, 2018. Cited by 2714*.
mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer. L Xue, N Constant, A Roberts, M Kale, R Al-Rfou, A Siddhant, A Barua, ... arXiv preprint arXiv:2010.11934, 2020. Cited by 1984.
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ... arXiv preprint arXiv:2206.04615, 2022. Cited by 869.
Multilingual Universal Sentence Encoder for Semantic Retrieval. Y Yang, D Cer, A Ahmad, M Guo, J Law, N Constant, GH Abrego, S Yuan, ... arXiv preprint arXiv:1907.04307, 2019. Cited by 511.
Character-Level Language Modeling with Deeper Self-Attention. R Al-Rfou, D Choe, N Constant, M Guo, L Jones. Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 3159-3166, 2019. Cited by 436.
ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models. L Xue, A Barua, N Constant, R Al-Rfou, S Narang, M Kale, A Roberts, ... Transactions of the Association for Computational Linguistics 10, 291-306, 2022. Cited by 345.
Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models. J Ni, GH Ábrego, N Constant, J Ma, KB Hall, D Cer, Y Yang. arXiv preprint arXiv:2108.08877, 2021. Cited by 324.
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. T Vu, B Lester, N Constant, R Al-Rfou, D Cer. arXiv preprint arXiv:2110.07904, 2021. Cited by 224.
Contrastive Topic: Meanings and Realizations. N Constant. 2014. Cited by 209.
Learning Semantic Textual Similarity from Conversations. Y Yang, S Yuan, D Cer, S Kong, N Constant, P Pilar, H Ge, YH Sung, ... arXiv preprint arXiv:1804.07754, 2018. Cited by 179.
XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation. S Ruder, N Constant, J Botha, A Siddhant, O Firat, J Fu, P Liu, J Hu, ... arXiv preprint arXiv:2104.07412, 2021. Cited by 130.
Effective Parallel Corpus Mining using Bilingual Sentence Embeddings. M Guo, Q Shen, Y Yang, H Ge, D Cer, GH Abrego, K Stevens, N Constant, ... arXiv preprint arXiv:1807.11906, 2018. Cited by 119.
English rise-fall-rise: A study in the semantics and pragmatics of intonation. N Constant. Linguistics and Philosophy 35 (5), 407-442, 2012. Cited by 109.
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation. T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ... arXiv preprint arXiv:2310.03214, 2023. Cited by 68.
The pragmatics of expressive content: Evidence from large corpora. N Constant, C Davis, C Potts, F Schwarz. Sprache und Datenverarbeitung 33 (1-2), 5-21, 2009. Cited by 68.
ReQA: An Evaluation for End-to-End Answer Retrieval Models. A Ahmad, N Constant, Y Yang, D Cer. arXiv preprint arXiv:1907.04780, 2019. Cited by 55.
TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling. P Riley, N Constant, M Guo, G Kumar, D Uthus, Z Parekh. arXiv preprint arXiv:2010.03802, 2020. Cited by 53*.
LAReQA: Language-Agnostic Answer Retrieval from a Multilingual Pool. U Roy, N Constant, R Al-Rfou, A Barua, A Phillips, Y Yang. arXiv preprint arXiv:2004.05484, 2020. Cited by 53.
Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation. T Vu, A Barua, B Lester, D Cer, M Iyyer, N Constant. arXiv preprint arXiv:2205.12647, 2022. Cited by 47.