Context and Motivation
Recent studies have highlighted transparency and explainability as important quality requirements of AI systems. However, there are still relatively few case studies that describe the current state of defining these quality requirements in practice.
Objective
This study consisted of two phases. In the first phase, our goal was to explore what ethical guidelines organizations have defined for the development of transparent and explainable AI systems. In the second phase, we investigated how explainability requirements can be defined in practice.
Methods
In the first phase, we analyzed the ethical guidelines of 16 organizations representing different industries and the public sector. In the second phase, we conducted an empirical study with practitioners to evaluate the results of the first phase.
Results
The analysis of the ethical guidelines revealed that almost all of the organizations highlight the importance of transparency and consider explainability an integral part of it. To support the definition of explainability requirements, we propose a model of explainability components for identifying explainability needs and a template for representing explainability requirements. The paper also describes the lessons we learned from applying the model and the template in practice.
Contribution
For researchers, this paper provides insights into what organizations consider important in the transparency and, in particular, the explainability of AI systems. For practitioners, this study suggests a systematic and structured way to define the explainability requirements of AI systems. Furthermore, the results highlight a set of good practices that help define the explainability of AI systems.