Dynabench: Rethinking benchmarking in NLP. D Kiela, M Bartolo, Y Nie, D Kaushik, A Geiger, Z Wu, B Vidgen, G Prasad, et al. Proceedings of NAACL-HLT 2021. | Cited by: 324
Causal abstractions of neural networks. A Geiger, H Lu, T Icard, C Potts. Advances in Neural Information Processing Systems 34, 9574-9586, 2021. | Cited by: 130
Neural natural language inference models partially embed theories of lexical entailment and negation. A Geiger, K Richardson, C Potts. Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020. | Cited by: 79
DynaSent: A dynamic benchmark for sentiment analysis. C Potts, Z Wu, A Geiger, D Kiela. arXiv preprint arXiv:2012.15349, 2020. | Cited by: 72
Inducing causal structure for interpretable neural networks. A Geiger, Z Wu, H Lu, J Rozner, E Kreiss, T Icard, N Goodman, C Potts. International Conference on Machine Learning, 7324-7338, 2022. | Cited by: 63
Interpretability at scale: Identifying causal mechanisms in Alpaca. Z Wu, A Geiger, T Icard, C Potts, N Goodman. Advances in Neural Information Processing Systems 36, 2024. | Cited by: 52
Finding alignments between interpretable causal variables and distributed neural representations. A Geiger, Z Wu, C Potts, T Icard, N Goodman. Causal Learning and Reasoning, 160-187, 2024. | Cited by: 50
Hybrid Pluggable Processing Pipeline (HyP3): A cloud-based infrastructure for generic processing of SAR data. K Hogenson, SA Arko, B Buechler, R Hogenson, J Herrmann, A Geiger. AGU Fall Meeting Abstracts 2016, IN21B-1740. | Cited by: 47
Posing fair generalization tasks for natural language inference. A Geiger, I Cases, L Karttunen, C Potts. Proceedings of EMNLP-IJCNLP 2019. | Cited by: 46
Causal abstraction for faithful model interpretation. A Geiger, C Potts, T Icard. arXiv preprint arXiv:2301.04709, 2023. | Cited by: 38
CEBaB: Estimating the causal effects of real-world concepts on NLP model behavior. ED Abraham, K D'Oosterlinck, A Feder, Y Gat, A Geiger, C Potts, et al. Advances in Neural Information Processing Systems 35, 17582-17596, 2022. | Cited by: 32
Recursive routing networks: Learning to compose modules for language understanding. I Cases, C Rosenbaum, M Riemer, A Geiger, T Klinger, A Tamkin, O Li, et al. Proceedings of NAACL-HLT 2019. | Cited by: 28
Stress-testing neural models of natural language inference with multiply-quantified sentences. A Geiger, I Cases, L Karttunen, C Potts. arXiv preprint arXiv:1810.13033, 2018. | Cited by: 27
Causal proxy models for concept-based model explanations. Z Wu, K D'Oosterlinck, A Geiger, A Zur, C Potts. International Conference on Machine Learning, 37313-37334, 2023. | Cited by: 26
Linear representations of sentiment in large language models. C Tigges, OJ Hollinsworth, A Geiger, N Nanda. arXiv preprint arXiv:2310.15154, 2023. | Cited by: 21
Causal distillation for language models. Z Wu, A Geiger, J Rozner, E Kreiss, H Lu, T Icard, C Potts, ND Goodman. arXiv preprint arXiv:2112.02505, 2021. | Cited by: 20
Rigorously assessing natural language explanations of neurons. J Huang, A Geiger, K D'Oosterlinck, Z Wu, C Potts. arXiv preprint arXiv:2309.10312, 2023. | Cited by: 19
Relational reasoning and generalization using nonsymbolic neural networks. A Geiger, A Carstensen, MC Frank, C Potts. Psychological Review 130 (2), 308, 2023. | Cited by: 18
ReFT: Representation finetuning for language models. Z Wu, A Arora, Z Wang, A Geiger, D Jurafsky, CD Manning, C Potts. arXiv preprint arXiv:2404.03592, 2024. | Cited by: 10
ScoNe: Benchmarking negation reasoning in language models with fine-tuning and in-context learning. JS She, C Potts, SR Bowman, A Geiger. arXiv preprint arXiv:2305.19426, 2023. | Cited by: 9*