In the field of artificial intelligence (AI), the quest to understand and model data-generating processes (DGPs) is of paramount importance. Deep generative models (DGMs) have …
Counterfactual explanations have shown promising results as a post-hoc framework to make image classifiers more explainable. In this paper, we propose DiME, a method allowing the …
Counterfactual explanations and adversarial attacks have a related goal: flipping output labels with minimal perturbations regardless of their characteristics. Yet, adversarial attacks …
Interpretability of neural networks aims at the development of models that can give information to the end-user about their inner workings and/or predictions, while keeping the …
With the ongoing rise of machine learning, the need for methods that explain decisions made by artificial intelligence systems is becoming an increasingly important topic …
Explanation techniques that synthesize small, interpretable changes to a given image while producing desired changes in the model prediction have become popular for introspecting …
P Wang, N Vasconcelos - IEEE Transactions on Pattern …, 2023 - ieeexplore.ieee.org
Attribution-based explanations are popular in computer vision but of limited use for fine-grained classification problems typical of expert domains, where classes differ by subtle …
Counterfactual explanation is a common class of methods to make local explanations of machine learning decisions. For a given instance, these methods aim to find the smallest …
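The objective described above — finding the smallest perturbation of an instance that flips a model's decision — can be illustrated with a minimal sketch. The code below is not the method of any paper cited here; it is a generic gradient-based counterfactual search against a toy logistic classifier, and all weights, step sizes, and penalty values are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear classifier: class 1 iff w.x + b > 0.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    """Return the hard label of the toy classifier."""
    return int(w @ x + b > 0)

def counterfactual(x, lam=0.1, lr=0.05, steps=500):
    """Search for a nearby x' whose label differs from x's.

    Minimizes lam * ||x' - x||^2 plus a hinge term that pushes the
    decision score across the boundary with a small margin (0.1).
    """
    target = 1 - predict(x)               # flip the original label
    sign = 1.0 if target == 1 else -1.0   # direction to push the score
    xp = x.copy()
    for _ in range(steps):
        score = w @ xp + b
        grad = 2 * lam * (xp - x)         # proximity (L2) penalty gradient
        if sign * score < 0.1:            # hinge still active: push the score
            grad -= sign * w
        xp -= lr * grad
        # stop once the label has flipped with the required margin
        if predict(xp) == target and sign * (w @ xp + b) >= 0.1:
            break
    return xp

x0 = np.array([2.0, 0.0])                 # classified as 1 (score = 2.5)
xcf = counterfactual(x0)
assert predict(xcf) != predict(x0)        # label flipped by a small perturbation
```

In practice, counterfactual methods for images add further constraints (plausibility, sparsity, staying on the data manifold), which is precisely where the generative-model-based approaches in these abstracts differ.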
O Rotem, T Schwartz, R Maor, Y Tauber… - Nature …, 2024 - nature.com
The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep …