Adversarial machine learning for network intrusion detection systems: A comprehensive survey

K He, DD Kim, MR Asghar - IEEE Communications Surveys & …, 2023 - ieeexplore.ieee.org
A Network-based Intrusion Detection System (NIDS) forms the frontline defence against
network attacks that compromise the security of data, systems, and networks. In recent …

Adversarial machine learning attacks against intrusion detection systems: A survey on strategies and defense

A Alotaibi, MA Rassam - Future Internet, 2023 - mdpi.com
Concerns about cybersecurity and attack methods have risen in the information age. Many
techniques are used to detect or deter attacks, such as intrusion detection systems (IDSs) …

Structure invariant transformation for better adversarial transferability

X Wang, Z Zhang, J Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Given the severe vulnerability of Deep Neural Networks (DNNs) to adversarial
examples, there is an urgent need for an effective adversarial attack to identify the …

Improving the transferability of adversarial samples by path-augmented method

J Zhang, J Huang, W Wang, Y Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks have achieved unprecedented success on diverse vision tasks.
However, they are vulnerable to adversarial noise that is imperceptible to humans. This …

How does information bottleneck help deep learning?

K Kawaguchi, Z Deng, X Ji… - … Conference on Machine …, 2023 - proceedings.mlr.press
Numerous deep learning algorithms have been inspired by and understood via the notion of
information bottleneck, where unnecessary information is (often implicitly) minimized while …
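
A minimal statement of the objective this line of work refers to (the standard information-bottleneck Lagrangian; the notation is assumed here, not quoted from the paper): for input X, label Y, and learned representation Z,

    \min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y),

where I(·;·) denotes mutual information and β > 0 sets how strongly information about X is compressed while label-relevant information is retained.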

Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …
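
As a concrete illustration of such perturbations, a minimal sketch of the classic fast gradient sign method in PyTorch (a generic attack, not the review's own contribution; model, images, and labels are placeholders):

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, labels, eps=8 / 255):
        """Craft a one-step adversarial example that stays within an eps-ball of the input."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step in the direction that increases the loss, then clip back to the valid image range.
        adv = images + eps * images.grad.sign()
        return adv.clamp(0, 1).detach()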

A survey on learning to reject

XY Zhang, GS Xie, X Li, T Mei… - Proceedings of the IEEE, 2023 - ieeexplore.ieee.org
Learning to reject is a special kind of self-awareness (the ability to know what you do not
know), which is an essential factor for humans to become smarter. Although machine …
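
To make the idea concrete, a bare-bones confidence-threshold rejector (a generic sketch, not the survey's taxonomy; the threshold of 0.9 is illustrative):

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def predict_or_reject(model, x, threshold=0.9):
        """Return class predictions, with -1 marking inputs the model abstains on."""
        probs = F.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        pred[conf < threshold] = -1  # abstain when the model is not confident enough
        return pred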

An adaptive model ensemble adversarial attack for boosting adversarial transferability

B Chen, J Yin, S Chen, B Chen… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
While the transferability property of adversarial examples allows the adversary to perform
black-box attacks (i.e., the attacker has no knowledge about the target model), the transfer …
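
The non-adaptive baseline that such work improves on simply averages the surrogate models' losses with equal weights, as in this sketch (assumed interface; the paper's adaptive weighting is not reproduced here):

    import torch
    import torch.nn.functional as F

    def ensemble_fgsm(models, images, labels, eps=8 / 255):
        """One-step attack on the equally weighted average loss of several surrogate models."""
        images = images.clone().detach().requires_grad_(True)
        loss = sum(F.cross_entropy(m(images), labels) for m in models) / len(models)
        loss.backward()
        adv = images + eps * images.grad.sign()
        return adv.clamp(0, 1).detach()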

Image shortcut squeezing: Countering perturbative availability poisons with compression

Z Liu, Z Zhao, M Larson - International conference on …, 2023 - proceedings.mlr.press
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …
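
One simple instance of such compression is low-quality JPEG re-encoding of each training image before use, sketched below with Pillow (the quality setting is illustrative, and the paper's exact pipeline may differ):

    from io import BytesIO
    from PIL import Image

    def jpeg_squeeze(image: Image.Image, quality: int = 10) -> Image.Image:
        """Re-encode an image at low JPEG quality to squeeze out small poisoning perturbations."""
        buf = BytesIO()
        image.convert("RGB").save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        out = Image.open(buf)
        out.load()
        return out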

Improving generalization of adversarial training via robust critical fine-tuning

K Zhu, X Hu, J Wang, X Xie… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks are susceptible to adversarial examples, posing a significant security
risk in critical applications. Adversarial Training (AT) is a well-established technique to …
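
The objective underlying Adversarial Training, which such fine-tuning methods start from, is the usual min-max problem (standard formulation, not specific to this paper):

    \min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{\|\delta\|_{\infty} \le \epsilon} \mathcal{L}\big( f_{\theta}(x + \delta),\, y \big) \Big],

where the inner maximization crafts a worst-case perturbation within an eps-ball and the outer minimization fits the network parameters against it.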