Responsible Artificial Intelligence (RAI) is widely considered one of the greatest scientific challenges of our time and is key to increasing the adoption of Artificial Intelligence (AI) …
Interpretability in machine learning (ML) is crucial for high-stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and …
Machine learning models in safety-critical settings like healthcare are often “black boxes”: they contain a large number of parameters that are not transparent to users. Post-hoc …
S Studer, TB Bui, C Drescher, A Hanuschkin… - Machine learning and …, 2021 - mdpi.com
Machine learning is an established and frequently used technique in industry and academia, but a standard process model to improve the success and efficiency of machine …
Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the …
R Hamon, H Junklewitz, I Sanchez… - IEEE Computational …, 2022 - ieeexplore.ieee.org
Can satisfactory explanations for complex machine learning models be achieved in high-risk automated decision-making? How can such explanations be integrated into a data …
B Brożek, M Furman, M Jakubiec… - Artificial Intelligence and …, 2024 - Springer
This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem …
Since its inception, the choice modelling field has been dominated by theory-driven modelling approaches. Machine learning offers an alternative, data-driven approach for …
M Buiten, A De Streel, M Peitz - Computer Law & Security Review, 2023 - Elsevier
The deployment of AI systems presents challenges for liability rules. This paper identifies these challenges and evaluates how liability rules should be adapted in response. The …