Ananya Harsh Jha
Allen Institute for AI
Verified email at allenai.org
Title | Cited by | Year
Disentangling factors of variation with cycle-consistent variational auto-encoders
AH Jha, S Anand, M Singh, VSR Veeravasarapu
Proceedings of the European Conference on Computer Vision (ECCV), 805-820, 2018
Cited by: 147 · Year: 2018
TorchMetrics - Measuring Reproducibility in PyTorch
N Detlefsen, J Borovec, J Schock, A Jha, T Koker, L Di Liello
Cited by: 105* · Year: 2022
Dolma: An open corpus of three trillion tokens for language model pretraining research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
arXiv preprint arXiv:2402.00159, 2024
Cited by: 34 · Year: 2024
Olmo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
arXiv preprint arXiv:2402.00838, 2024
Cited by: 29 · Year: 2024
AASAE: Augmentation-Augmented Stochastic Autoencoders
W Falcon, AH Jha, T Koker, K Cho
arXiv preprint arXiv:2107.12329, 2021
Cited by: 8 · Year: 2021
How to train your (compressed) large language model
AH Jha, T Sherborne, EP Walsh, D Groeneveld, E Strubell, I Beltagy
arXiv preprint arXiv:2305.14864, 2023
Cited by: 5* · Year: 2023
Paloma: A benchmark for evaluating language model fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
arXiv preprint arXiv:2312.10523, 2023
Cited by: 1 · Year: 2023
Robust Tooling and New Resources for Large Language Model Evaluation via Catwalk
K Richardson, I Magnusson, O Tafjord, A Bhagia, I Beltagy, A Cohan, ...
Articles 1–8