As machine learning methods see greater adoption and implementation in high stakes applications such as medical image diagnosis, the need for model interpretability and …
Counterfactual explanations, which address “why not?” scenarios, can provide insightful explanations of an AI agent's behavior [Miller, 38]. In this work, we focus on generating …
Multivariate time series are used in many science and engineering domains, including health-care, astronomy, and high-performance computing. A recent trend is to use machine …
Measuring algorithmic bias is crucial both to assess algorithmic fairness and to guide the improvement of algorithms. Current bias measurement methods in computer vision are …
C Lovering, E Pavlick - Transactions of the Association for …, 2022 - direct.mit.edu
Many complex problems are naturally understood in terms of symbolic concepts. For example, our concept of “cat” is related to our concepts of “ears” and “whiskers” in a non …
Recent advancements in artificial intelligence (AI) technology have raised concerns about ethical, moral, and legal safeguards. There is a pressing need to improve metrics for …
Vision-and-Language Navigation (VLN) is a task where agents must decide how to move through a 3D environment to reach a goal by grounding natural language instructions …
Explaining deep learning model inferences is a promising avenue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and …
Broad-XAI moves away from interpreting individual decisions based on a single datum and aims to integrate explanations from multiple machine learning algorithms into a …