Abstract Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle …
We introduce a novel model-agnostic post-hoc Explainable AI method that provides meaningful interpretations for hidden neuron activations in a Convolutional Neural Network …
A major challenge in Explainable AI is in correctly interpreting activations of hidden neurons: accurate interpretations would provide insights into the question of what a deep learning …
Abstract Providing transparent and understandable insights into complex AI models is a significant challenge for Explainable Artificial Intelligence (XAI). Traditional post-hoc …
Identification of threatening comments on social media platforms has recently gained attention. Prior approaches have addressed this task in some low-resource languages but …
Knowledge Graphs (KG) are the backbone of many data-intensive applications since they can represent data coupled with its meaning and context. Aligning KGs across different …
Recent advances in AI--including generative approaches--have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt …
AW Wibowo, E Sato-Shimokawara - 2024 Joint 13th …, 2024 - ieeexplore.ieee.org
Recently, studies on human activity recognition have advanced and been applied in various fields. In this field, machine learning and deep learning techniques have …
In this study, we explore the capability of Large Language Models (LLMs) to automate Concept Induction, a process traditionally reliant on formal logical reasoning using …