Ontologies to build a predictive architecture to classify and explain

M Bellucci, N Delestre, N Malandain… - DeepOntoNLP Workshop @ ESWC 2022 - hal.science
Explainable AI is gaining traction because of the widespread use of black-box models in industry. Many explanation methods have been proposed to explain models without modifying their design. The literature describes a new architecture in which an explainable model interacts with an explanation interface to generate explanations tailored to a user. Based on this architecture, we propose a novel image classification system that combines an ontology with machine learning models. It uses the ontology to add different labels to the same dataset and trains machine learning models to predict both the class of an object and the properties listed for it in the ontology. The outputs of these models are added to the ontology, where logical reasoning verifies that the predictions are consistent. The ontology can then be explored to understand a prediction and why it is or is not consistent. The system can warn the user when a prediction is uncertain, which helps users decide when to trust it.
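
The core mechanism described in the abstract, asserting model outputs into the ontology and letting a reasoner flag contradictions, can be illustrated in a few lines. Below is a minimal sketch assuming the owlready2 Python library and a toy ontology; the class names (Penguin, FlyingAnimal), the individual, and the predicted labels are hypothetical stand-ins for illustration, not the authors' actual ontology or models.

```python
# A minimal sketch of the consistency check, assuming owlready2
# (pip install owlready2; its bundled HermiT reasoner requires Java).
# The ontology, class names, and predictions are illustrative only.
from owlready2 import (AllDisjoint, Thing, get_ontology, sync_reasoner,
                       OwlReadyInconsistentOntologyException)

onto = get_ontology("http://example.org/toy.owl")

with onto:
    class Bird(Thing): pass
    class Penguin(Bird): pass
    class FlyingAnimal(Thing): pass
    # Domain knowledge encoded in the ontology: penguins do not fly.
    AllDisjoint([Penguin, FlyingAnimal])

# Hypothetical model outputs: one model predicts the object's class,
# another predicts one of its properties listed in the ontology.
obj = Penguin("observed_object_1")    # class model says "penguin"
obj.is_a.append(FlyingAnimal)         # property model says "it flies"

# Logical reasoning over the combined assertions: an inconsistency
# means the predictions contradict each other or the ontology, which
# is the cue to warn the user that the prediction is uncertain.
try:
    with onto:
        sync_reasoner()
    print("Predictions are mutually consistent.")
except OwlReadyInconsistentOntologyException:
    print("Warning: inconsistent predictions; treat with caution.")
```

In this sketch the reasoner rejects the combined assertions because Penguin and FlyingAnimal are disjoint; that detected inconsistency plays the role of the uncertainty warning the abstract describes, and the asserted facts remain available in the ontology for a user to explore why the prediction failed the check.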