Arkil Patel
Grad Student, Mila and McGill University
Verified email at mila.quebec - Homepage
Title | Cited by | Year
Are NLP Models really able to Solve Simple Math Word Problems?
A Patel, S Bhattamishra, N Goyal
NAACL, 2021
430 | 2021
On the computational power of transformers and its implications in sequence modeling
S Bhattamishra, A Patel, N Goyal
CoNLL, 2020
59 | 2020
Vehiclechain: blockchain-based vehicular data transmission scheme for smart city
A Patel, N Shah, T Limbasiya, D Das
IEEE - SMC, 2019
24 | 2019
Revisiting the Compositional Generalization Abilities of Neural Sequence Models
A Patel, S Bhattamishra, P Blunsom, N Goyal
ACL, 2022
23 | 2022
Understanding in-context learning in transformers and LLMs by learning to learn discrete functions
S Bhattamishra, A Patel, P Blunsom, V Kanade
ICLR, 2023
19 | 2023
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
S Bhattamishra, A Patel, V Kanade, P Blunsom
ACL, 2023
19 | 2023
When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks
A Sikarwar, A Patel, N Goyal
EMNLP, 2022
9 | 2022
Evaluating In-Context Learning of Libraries for Code Generation
A Patel, S Reddy, D Bahdanau, P Dasigi
NAACL, 2024
3 | 2024
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
A Patel, S Bhattamishra, S Reddy, D Bahdanau
EMNLP, 2023
3 | 2023
Universal adversarial triggers are not universal
N Meade, A Patel, S Reddy
arXiv preprint arXiv:2404.16020, 2024
2 | 2024
Articles 1–10