SHAP-based explanation methods: a review for NLP interpretability

E Mosca, F Szigeti, S Tragianni… - Proceedings of the …, 2022 - aclanthology.org
Abstract: Model explanations are crucial for the transparent, safe, and trustworthy
deployment of machine learning models. The SHapley Additive exPlanations (SHAP) …

A Benchmark Dataset to Distinguish Human-Written and Machine-Generated Scientific Papers

MHI Abdalla, S Malberg, D Dementieva, E Mosca… - Information, 2023 - mdpi.com
As generative NLP can now produce content nearly indistinguishable from human writing, it
is becoming difficult to identify genuine research contributions in academic writing and …

Uncovering trauma in genocide tribunals: An NLP approach using the Genocide Transcript Corpus

M Schirmer, IMO Nolasco, E Mosca, S Xu… - Proceedings of the …, 2023 - dl.acm.org
This paper applies Natural Language Processing (NLP) methods to analyze the exposure to
trauma experienced by witnesses in international criminal tribunals when testifying in court …

Explainable AI for the Human-Centric Development of NLP Models: Working Towards more Interpretable, Robust, and Controllable Models

E Mosca - 2024 - mediatum.ub.tum.de
Larger and more complex models have consistently raised the performance bar in most
Natural Language Processing (NLP) applications, exhibiting a growing presence in society …

Methods for the classification of data from open-ended questions in surveys

C Landesvatter - 2024 - madoc.bib.uni-mannheim.de
This dissertation investigates techniques for analyzing open-ended survey responses, which
are typically short and lack contextual information. Specialized methods, such as word …