Dynamic Contextualized Word Embeddings. V Hofmann, JB Pierrehumbert, H Schütze. ACL, 2021.
Superbizarre Is Not Superb: Derivational Morphology Improves BERT's Interpretation of Complex Words. V Hofmann, JB Pierrehumbert, H Schütze. ACL, 2021.
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, et al. ACL, 2024.
DagoBERT: Generating Derivational Morphology with a Pretrained Language Model. V Hofmann, JB Pierrehumbert, H Schütze. EMNLP, 2020.
An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers. V Hofmann, H Schütze, J Pierrehumbert. ACL, 2022.
The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative. L Weissweiler, V Hofmann, A Köksal, H Schütze. EMNLP, 2022.
Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity. V Hofmann, X Dong, J Pierrehumbert, H Schütze. NAACL Findings, 2022.
Dialect Prejudice Predicts AI Decisions About People's Character, Employability, and Criminality. V Hofmann, PR Kalluri, D Jurafsky, S King. arXiv:2403.00742, 2024.
The Reddit Politosphere: A Large-Scale Text and Network Resource of Online Political Discourse. V Hofmann, H Schütze, JB Pierrehumbert. ICWSM, 2022.
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models. P Röttger*, V Hofmann*, V Pyatkin, M Hinck, HR Kirk, H Schütze, D Hovy. ACL, 2024.
A Graph Auto-encoder Model of Derivational Morphology. V Hofmann, H Schütze, JB Pierrehumbert. ACL, 2020.
Predicting the Growth of Morphological Families from Social and Linguistic Factors. V Hofmann, JB Pierrehumbert, H Schütze. ACL, 2020.
Geographic Adaptation of Pretrained Language Models. V Hofmann, G Glavaš, N Ljubešić, JB Pierrehumbert, H Schütze. TACL, 2024.
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model. L Weissweiler*, V Hofmann*, A Kantharuban, A Cai, R Dutt, A Hengle, et al. EMNLP, 2023.
Paloma: A Benchmark for Evaluating Language Model Fit. I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, et al. arXiv:2312.10523, 2023.
Graph-enhanced Large Language Models in Asynchronous Plan Reasoning. F Lin, E La Malfa, V Hofmann, EM Yang, A Cohn, JB Pierrehumbert. ICML, 2024.
CaMEL: Case Marker Extraction without Labels. L Weissweiler, V Hofmann, MJ Sabet, H Schütze. ACL, 2022.
Computational Investigations of Derivational Morphology. V Hofmann. University of Oxford, 2023.
Explaining Pretrained Language Models' Understanding of Linguistic Structures Using Construction Grammar. L Weissweiler, V Hofmann, A Köksal, H Schütze. Frontiers in Artificial Intelligence, 2023.
Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology. V Hofmann, J Pierrehumbert, H Schütze. ICML, 2022.