Explanations have gained increasing interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and allow users to form a mental …
Chemists can be skeptical about using deep learning (DL) in decision making, due to the lack of interpretability of “black-box” models. Explainable artificial intelligence (XAI) is a branch of …
External data sources are increasingly used to train machine learning (ML) models as data demand grows. However, the integration of external data into training poses …
Part-prototype Networks (ProtoPNets) are concept-based classifiers designed to achieve the same performance as black-box models without compromising transparency. ProtoPNets …
We present a novel rationale-centric framework with human-in-the-loop, Rationales-centric Double-robustness Learning (RDL), to boost model out-of-distribution performance in few …
Despite being highly performant, deep neural networks might base their decisions on features that spuriously correlate with the provided labels, thus hurting generalization. To …
As machine learning models become larger and are increasingly trained on large, uncurated datasets in a weakly supervised manner, it becomes important to establish …
Identifying the spurious correlations learned by a trained model is at the core of refining it and building trustworthy models. We present a simple method to identify …
D Zhang, M Williams, F Toni - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
Neural networks (NNs) can learn to rely on spurious signals in the training data, leading to poor generalisation. Recent methods tackle this problem by training NNs with additional …