What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models
A Ettinger. Transactions of the Association for Computational Linguistics 8, 34-48, 2020. Cited by 643.

Faith and fate: Limits of transformers on compositionality
N Dziri, X Lu, M Sclar, XL Li, L Jiang, BY Lin, S Welleck, P West, ... Advances in Neural Information Processing Systems 36, 2024. Cited by 177.

Probing for semantic evidence of composition by means of simple classification tasks
A Ettinger, A Elgohary, P Resnik. Proceedings of the 1st workshop on evaluating vector-space representations …, 2016. Cited by 153.

Towards linguistically generalizable NLP systems: A workshop and shared task
A Ettinger, S Rao, H Daumé III, EM Bender. arXiv preprint arXiv:1711.01505, 2017. Cited by 97.

Assessing Composition in Sentence Vector Representations
A Ettinger, A Elgohary, C Phillips, P Resnik. Proceedings of the 27th International Conference on Computational …, 2018. Cited by 89.

Assessing phrasal representation and composition in transformers
L Yu, A Ettinger. arXiv preprint arXiv:2010.03763, 2020. Cited by 74.

The role of morphology in phoneme prediction: Evidence from MEG
A Ettinger, T Linzen, A Marantz. Brain and Language 129, 14-23, 2014. Cited by 66.

Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic Priming
K Misra, A Ettinger, JT Rayz. arXiv preprint arXiv:2010.03010, 2020. Cited by 55.

Learning to ignore: Long document coreference with bounded memory neural networks
S Toshniwal, S Wiseman, A Ettinger, K Livescu, K Gimpel. arXiv preprint arXiv:2010.02807, 2020. Cited by 54.

Modeling N400 amplitude using vector space models of word representation
A Ettinger, NH Feldman, P Resnik, C Phillips. Proceedings of the 38th annual conference of the Cognitive Science Society …, 2016. Cited by 50.

Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
J Klafka, A Ettinger. arXiv preprint arXiv:2005.01810, 2020. Cited by 45.

Sorting through the noise: Testing robustness of information processing in pre-trained language models
L Pandia, A Ettinger. arXiv preprint arXiv:2109.12393, 2021. Cited by 31.

Do language models learn typicality judgments from text?
K Misra, A Ettinger, JT Rayz. arXiv preprint arXiv:2105.02987, 2021. Cited by 29.

Retrofitting sense-specific word vectors using parallel text
A Ettinger, P Resnik, M Carpuat. Proceedings of the 2016 Conference of the North American Chapter of the …, 2016. Cited by 29.

Pragmatic competence of pre-trained language models through the lens of discourse connectives
L Pandia, Y Cong, A Ettinger. arXiv preprint arXiv:2109.12951, 2021. Cited by 25.

Evaluating vector space models using human semantic priming results
A Ettinger, T Linzen. Proceedings of the 1st workshop on evaluating vector-space representations …, 2016. Cited by 24.

COMPS: Conceptual minimal pair sentences for testing robust property knowledge and its inheritance in pre-trained language models
K Misra, JT Rayz, A Ettinger. arXiv preprint arXiv:2210.01963, 2022. Cited by 23.

Dialogue focus tracking for zero pronoun resolution
S Rao, A Ettinger, H Daumé III, P Resnik. Proceedings of the 2015 Conference of the North American Chapter of the …, 2015. Cited by 21.

Mandarin utterance-final particle ba (吧) in the conversational scoreboard
A Ettinger, SA Malamud. Proceedings of Sinn und Bedeutung 19, 232-251, 2015. Cited by 17.

A property induction framework for neural language models
K Misra, JT Rayz, A Ettinger. arXiv preprint arXiv:2205.06910, 2022. Cited by 16.