Disentangling factors of variation with cycle-consistent variational auto-encoders
AH Jha, S Anand, M Singh, VSR Veeravasarapu
Proceedings of the European Conference on Computer Vision (ECCV), 805-820, 2018 | Cited by: 147 | Year: 2018

TorchMetrics - Measuring Reproducibility in PyTorch
N Detlefsen, J Borovec, J Schock, A Jha, T Koker, L Di Liello
Cited by: 105* | Year: 2022

Dolma: An open corpus of three trillion tokens for language model pretraining research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
arXiv preprint arXiv:2402.00159, 2024 | Cited by: 34 | Year: 2024

OLMo: Accelerating the science of language models
D Groeneveld, I Beltagy, P Walsh, A Bhagia, R Kinney, O Tafjord, AH Jha, ...
arXiv preprint arXiv:2402.00838, 2024 | Cited by: 29 | Year: 2024

AASAE: Augmentation-Augmented Stochastic Autoencoders
W Falcon, AH Jha, T Koker, K Cho
arXiv preprint arXiv:2107.12329, 2021 | Cited by: 8 | Year: 2021

How to train your (compressed) large language model
AH Jha, T Sherborne, EP Walsh, D Groeneveld, E Strubell, I Beltagy
arXiv preprint arXiv:2305.14864, 2023 | Cited by: 5* | Year: 2023

Paloma: A benchmark for evaluating language model fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
arXiv preprint arXiv:2312.10523, 2023 | Cited by: 1 | Year: 2023

Robust Tooling and New Resources for Large Language Model Evaluation via Catwalk
K Richardson, I Magnusson, O Tafjord, A Bhagia, I Beltagy, A Cohan, ...