Effects of Untargeted Adversarial Attacks on Deep Learning Methods

E Degirmenci, I Ozcelik, A Yazici - 2022 15th International Conference on Information Security and …, 2022 - ieeexplore.ieee.org
The increasing connectivity of smart systems opens up new security vulnerabilities. Since smart systems are used in various sectors such as healthcare, smart cities, and intelligent industries, security becomes a fundamental concern. For this reason, security measures such as authorization, authentication, encryption, Intrusion Prevention Systems (IPS), and Intrusion Detection Systems (IDS) are often added to these systems. However, these systems are still vulnerable to attacks such as DoS and DDoS. Recently, Deep Learning methods have been used to detect these attacks as early as possible. These methods, however, have their own vulnerabilities to adversarial attacks. This paper analyzes the effects of three untargeted adversarial attacks on deep learning models, using the open dataset CICIDS2017. First, DDoS attacks are detected using deep learning methods. Then, adversarial examples are created with three untargeted attack methods: FGSM, PGD, and BIM. The results show that all three attacks are effective against the deep learning models, with PGD and BIM being the more effective. In addition, the training of the deep learning models is evaluated with K-fold cross-validation: the models produced in different folds can achieve different accuracy against adversarial attacks, so adversarial attacks may be used within the K-fold cross-validation process as a criterion for selecting the best model.
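
As a rough illustration of the attack methods named in the abstract, below is a minimal PyTorch sketch of untargeted FGSM and its iterative variant BIM (PGD is the same loop started from a random point inside the epsilon ball). The model, loss, and step-size parameters are illustrative assumptions, not the authors' experimental settings.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon):
        # One-step untargeted FGSM: x_adv = x + epsilon * sign(grad_x L(model(x), y)).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss (untargeted attack).
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def bim_attack(model, x, y, epsilon, alpha, steps):
        # BIM: repeated small FGSM steps, projected back into the
        # L-infinity ball of radius epsilon around the original input x.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = nn.functional.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        return x_adv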
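The abstract's closing suggestion, using robustness to adversarial examples as the selection criterion during K-fold cross-validation, could look roughly like the following scikit-learn sketch; train_model and adv_accuracy are hypothetical placeholders for the training routine and the adversarial evaluation (e.g., accuracy on FGSM/BIM/PGD examples), not code from the paper.

    from sklearn.model_selection import KFold

    def select_most_robust_model(X, y, train_model, adv_accuracy, n_splits=5):
        # Train one model per fold and keep the one with the best accuracy
        # on adversarially perturbed validation data.
        best_model, best_robust_acc = None, -1.0
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
        for train_idx, val_idx in kf.split(X):
            model = train_model(X[train_idx], y[train_idx])
            robust_acc = adv_accuracy(model, X[val_idx], y[val_idx])
            if robust_acc > best_robust_acc:
                best_model, best_robust_acc = model, robust_acc
        return best_model, best_robust_acc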