Measurably Stronger Explanation Reliability via Model Canonization

F Motzkus, L Weber, S Lapuschkin - 2022 IEEE International Conference on Image Processing (ICIP), 2022 - ieeexplore.ieee.org (also available as arXiv preprint arXiv:2202.06621)
While rule-based attribution methods have proven useful for providing local explanations for
Deep Neural Networks, explaining modern and more varied network architectures yields …