Fairness Aware Counterfactuals for Subgroups

L Kavouras, K Tsopelas… - Advances in …, 2024 - proceedings.neurips.cc
In this work, we present Fairness Aware Counterfactuals for Subgroups (FACTS), a
framework for auditing subgroup fairness through counterfactual explanations. We start with …

On Explaining Unfairness: An Overview

C Fragkathoulas, V Papanikou… - 2024 IEEE 40th …, 2024 - ieeexplore.ieee.org
Algorithmic fairness and explainability are foundational elements for achieving responsible
AI. In this paper, we focus on their interplay, a research area that has recently been receiving …

GCFExplainer: Global Counterfactual Explainer for Graph Neural Networks

M Kosan, Z Huang, S Medya, S Ranu… - ACM Transactions on …, 2024 - dl.acm.org
Graph neural networks (GNNs) find applications in various domains such as computational
biology, natural language processing, and computer security. Owing to their popularity, there …

GLANCE: Global Actions in a Nutshell for Counterfactual Explainability

L Kavouras, E Psaroudaki, K Tsopelas… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread deployment of machine learning systems in critical real-world
decision-making applications has highlighted the urgent need for counterfactual explainability …

Global Graph Counterfactual Explanation: A Subgraph Mapping Approach

Y He, W Zheng, Y Zhu, J Ma, S Mishra… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph Neural Networks (GNNs) have been widely deployed in various real-world
applications. However, most GNNs are black-box models that lack explanations. One …

FreqX: What neural networks learn is what network designers say

Z Liu - arXiv preprint arXiv:2411.18343, 2024 - arxiv.org
Personalized Federated Learning (PFL) allows clients to cooperatively train a personalized
model without disclosing their private datasets. However, PFL suffers from Non-IID …

FGCE: Feasible Group Counterfactual Explanations for Auditing Fairness

C Fragkathoulas, V Papanikou, E Pitoura… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper introduces the first graph-based framework for generating group counterfactual
explanations to audit model fairness, a crucial aspect of trustworthy machine learning …

Refining Counterfactual Explanations With Joint-Distribution-Informed Shapley Towards Actionable Minimality

L You, Y Bian, L Cao - arXiv preprint arXiv:2410.05419, 2024 - arxiv.org
Counterfactual explanations (CE) identify data points that closely resemble the observed
data but produce different machine learning (ML) model outputs, offering critical insights into …

Unifying Perspectives: Plausible Counterfactual Explanations on Global, Group-wise, and Local Levels

P Wielopolski, O Furman, J Stefanowski… - arXiv preprint arXiv …, 2024 - arxiv.org
Growing regulatory and societal pressures demand increased transparency in AI,
particularly in understanding the decisions made by complex machine learning models …

Fairness and Explainability for Enabling Trust in AI Systems

D Sacharidis - A Human-Centered Perspective of Intelligent …, 2024 - Springer
This chapter discusses the ethical complications and challenges arising from the use of AI
systems in our everyday lives. It outlines recent and upcoming regulations and policies …