Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies. L Nan, Y Zhao, W Zou, N Ri, J Tae, E Zhang, A Cohan, D Radev. arXiv preprint arXiv:2305.12586, 2023. Cited by 47.
Do Models Explain Themselves? Counterfactual Simulatability of Natural Language Explanations. Y Chen, R Zhong, N Ri, C Zhao, H He, J Steinhardt, Z Yu, K McKeown. arXiv preprint arXiv:2307.08678, 2023. Cited by 28.
Contrastive Loss is All You Need to Recover Analogies as Parallel Lines. N Ri, FT Lee, N Verma. arXiv preprint arXiv:2306.08221, 2023. Cited by 4.
Latent Space Interpretation for Stylistic Analysis and Explainable Authorship Attribution. M Alshomary, N Ri, M Apidianaki, A Patel, S Muresan, K McKeown. arXiv preprint arXiv:2409.07072, 2024.
The Effect of Model Capacity on the Emergence of In-Context Learning. B Ottlik, N Ri, D Hsu, C Sanford. ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2024.