Scientific inference with interpretable machine learning: Analyzing models to learn about real-world phenomena

T Freiesleben, G König, C Molnar… - arXiv preprint arXiv …, 2022 - arxiv.org
Interpretable machine learning (IML) is concerned with the behavior and the properties of
machine learning models. Scientists, however, are interested in models only as a gateway to
understanding phenomena. Our work aligns these two perspectives and shows how to
design IML property descriptors. These descriptors are IML methods that provide insight not
just into the model, but also into the properties of the phenomenon the model is designed to
represent. We argue that IML is necessary for scientific inference with ML models because …

Scientific inference with interpretable machine learning: Analyzing models to learn about real-world phenomena

T Freiesleben, G König, C Molnar… - Minds and Machines, 2024 - Springer
To learn about real-world phenomena, scientists have traditionally used models with clearly
interpretable elements. However, modern machine learning (ML) models, while powerful
predictors, lack this direct elementwise interpretability (e.g., neural network weights).
Interpretable machine learning (IML) offers a solution by analyzing models holistically to
derive interpretations. Yet, current IML research is focused on auditing ML models rather
than leveraging them for scientific inference. Our work bridges this gap, presenting a …
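
The abstracts do not spell out the authors' concrete descriptor designs, but a minimal sketch of the general idea, using a standard IML method (permutation feature importance) on a fitted model as a stand-in for an unknown data-generating process, might look as follows. The simulated data, model choice, and scikit-learn calls are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (illustrative assumption, not the authors' method):
# fit an ML model to data sampled from a phenomenon, then read the output of
# a standard IML method as a statement about the phenomenon, not just the model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated phenomenon: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(1000, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model is only a gateway: it approximates the data-generating process.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# IML "property descriptor": permutation importance on held-out data estimates how
# much each feature contributes to predictive performance; under assumptions such
# as a well-fitting model and uncorrelated features, this is interpreted as a
# property of the underlying phenomenon rather than of the model alone.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(["x0", "x1", "x2"],
                           result.importances_mean, result.importances_std):
    print(f"{name}: importance = {mean:.3f} +/- {std:.3f}")
```

As expected under this toy data-generating process, the descriptor ranks x0 far above x1 and assigns x2 an importance near zero, illustrating how an interpretation of the model can be carried over, with caveats, to the phenomenon it represents.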