Authors
Aditya Kuppa, Nhien-An Le-Khac
Publication date
2020/7/24
Conference paper
IEEE International Joint Conference on Neural Networks (IJCNN)
Description
The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand for transparency and explainability from stakeholders is increasing. Explanations supporting the output of ML models are crucial in cybersecurity, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods which help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; (c) defining the exact properties and metrics of the …
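To illustrate the kind of explainability method the abstract refers to, below is a minimal, hypothetical sketch (not the paper's method) of perturbation-based feature attribution for a black-box security classifier: each feature is individually replaced by a baseline value and the resulting drop in the model's score is reported as that feature's importance. The feature names and the toy scoring function are illustrative assumptions.

```python
import numpy as np

def black_box_score(x: np.ndarray) -> float:
    """Stand-in for an opaque ML detector: returns a maliciousness score in [0, 1]."""
    weights = np.array([0.8, 0.1, 0.5, 0.05])          # hidden from the analyst
    return float(1.0 / (1.0 + np.exp(-(x @ weights - 1.0))))

def perturbation_importance(x: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Score drop when each feature is individually replaced by its baseline value."""
    original = black_box_score(x)
    importances = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        perturbed = x.copy()
        perturbed[i] = baseline[i]                      # "remove" one feature
        importances[i] = original - black_box_score(perturbed)
    return importances

feature_names = ["entropy", "pkt_rate", "num_imports", "uptime"]  # hypothetical features
sample = np.array([3.0, 0.2, 2.5, 1.0])
baseline = np.zeros(4)
for name, imp in zip(feature_names, perturbation_importance(sample, baseline)):
    print(f"{name}: {imp:+.3f}")
```

Such per-feature attributions are the kind of richer-than-binary output the abstract argues security experts need when analyzing a model's prediction.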
Total citations
[Citations-per-year chart, 2019–2024]
Scholar articles
A Kuppa, NA Le-Khac - 2020 International Joint Conference on neural …, 2020