| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| When Can Transformers Ground and Compose: Insights from Compositional Generalization Benchmarks | A Sikarwar, A Patel, N Goyal | EMNLP 2022 | 10 | 2022 |
| On the Efficacy of Co-Attention Transformer Layers in Visual Question Answering | A Sikarwar, G Kreiman | arXiv preprint arXiv:2201.03965 | 4 | 2022 |
| Learning to Learn: How to Continuously Teach Humans and Machines | P Singh, Y Li, A Sikarwar, W Lei, D Gao, MB Talbot, Y Sun, MZ Shou, ... | ICCV 2023 | 3 | 2023 |
| Reason from context with self-supervised learning | X Liu, A Sikarwar, G Kreiman, Z Shi, M Zhang | arXiv preprint arXiv:2211.12817 | 2 | 2023 |
| Human or Machine? Turing Tests for Vision and Language | M Zhang, G Dellaferrera, A Sikarwar, M Armendariz, N Mudrik, P Agrawal, ... | arXiv preprint arXiv:2211.13087 | 1 | 2022 |
| Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory | A Sikarwar, M Zhang | NeurIPS 2023, Datasets and Benchmarks Track | | 2023 |
| Supplementary Material: Learning to Learn: How to Continuously Teach Humans and Machines | P Singh, L You, A Sikarwar, W Lei, D Gao, MB Talbot, Y Sun, MZ Shou, ... | | | |