Delivering trustworthy AI through formal XAI

J Marques-Silva, A Ignatiev - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
The deployment of systems of artificial intelligence (AI) in high-risk settings warrants the
need for trustworthy AI. This crucial requirement is highlighted by recent EU guidelines and …

Argumentative XAI: a survey

K Čyras, A Rago, E Albini, P Baroni, F Toni - arXiv preprint arXiv …, 2021 - arxiv.org
Explainable AI (XAI) has been investigated for decades and, together with AI itself, has
witnessed unprecedented growth in recent years. Among various approaches to XAI …

Computational argumentation-based chatbots: a survey

F Castagna, N Kökciyan, I Sassoon, S Parsons… - Journal of Artificial …, 2024 - jair.org
Chatbots are conversational software applications designed to interact dialectically with
users for a plethora of different purposes. Surprisingly, these colloquial agents have only …

On tackling explanation redundancy in decision trees

Y Izza, A Ignatiev, J Marques-Silva - Journal of Artificial Intelligence …, 2022 - jair.org
Decision trees (DTs) epitomize the ideal of interpretability of machine learning (ML) models.
The interpretability of decision trees motivates explainability approaches by so-called …

Logic-based explainability in machine learning

J Marques-Silva - … Knowledge: 18th International Summer School 2022 …, 2023 - Springer
The last decade witnessed an ever-increasing stream of successes in Machine Learning
(ML). These successes offer clear evidence that ML is bound to become pervasive in a wide …

Argumentative explanations for interactive recommendations

A Rago, O Cocarascu, C Bechlivanidis, D Lagnado… - Artificial Intelligence, 2021 - Elsevier
A significant challenge for recommender systems (RSs), and in fact for AI systems in
general, is the systematic definition of explanations for outputs in such a way that both the …

No silver bullet: interpretable ML models must be explained

J Marques-Silva, A Ignatiev - Frontiers in artificial intelligence, 2023 - frontiersin.org
Recent years witnessed a number of proposals for the use of the so-called interpretable
models in specific application domains. These include high-risk, but also safety-critical …

When, what, and how should generative artificial intelligence explain to users?

S Jang, H Lee, Y Kim, D Lee, J Shin, J Nam - Telematics and Informatics, 2024 - Elsevier
With the commercialization of ChatGPT, generative artificial intelligence (AI) has been
applied almost everywhere in our lives. However, even though generative AI has become a …

Conversational review-based explanations for recommender systems: Exploring users' query behavior

DC Hernandez-Bocanegra, J Ziegler - Proceedings of the 3rd …, 2021 - dl.acm.org
Providing explanations based on user reviews in recommender systems (RS) can increase
users' perception of system transparency. While static explanations are dominant, interactive …

Interactive explanations by conflict resolution via argumentative exchanges

A Rago, H Li, F Toni - arXiv preprint arXiv:2303.15022, 2023 - arxiv.org
As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the
outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static …