Authors
Arthur S Jacobs, Roman Beltiukov, Walter Willinger, Ronaldo A Ferreira, Arpit Gupta, Lisandro Z Granville
Publication date
2022/11/7
Conference
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
Pages
1537-1551
Description
Several recent research efforts have proposed Machine Learning (ML)-based solutions that can detect complex patterns in network traffic for a wide range of network security problems. However, without understanding how these black-box models are making their decisions, network operators are reluctant to trust and deploy them in their production settings. One key reason for this reluctance is that these models are prone to the problem of underspecification, defined here as the failure to specify a model in adequate detail. Not unique to the network security domain, this problem manifests itself in ML models that exhibit unexpectedly poor behavior when deployed in real-world settings and has prompted growing interest in developing interpretable ML solutions (e.g., decision trees) for "explaining" to humans how a given black-box model makes its decisions. However, synthesizing such explainable models that …
Total citations
Scholar articles
AS Jacobs, R Beltiukov, W Willinger, RA Ferreira… - Proceedings of the 2022 ACM SIGSAC Conference on …, 2022