Content-based unrestricted adversarial attack

Z Chen, B Li, S Wu, K Jiang, S Ding… - Advances in Neural …, 2024 - proceedings.neurips.cc
Unrestricted adversarial attacks typically manipulate the semantic content of an image (e.g.,
color or texture) to create adversarial examples that are both effective and photorealistic …

Exploring architectural ingredients of adversarially robust deep neural networks

H Huang, Y Wang, S Erfani, Q Gu… - Advances in Neural …, 2021 - proceedings.neurips.cc
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of
defense methods have been proposed to train adversarially robust DNNs, among which …

Resilient machine learning for networked cyber physical systems: A survey for machine learning security to securing machine learning for CPS

FO Olowononi, DB Rawat, C Liu - … Communications Surveys & …, 2020 - ieeexplore.ieee.org
Cyber Physical Systems (CPS) are characterized by their ability to integrate the physical and
information or cyber worlds. Their deployment in critical infrastructure has demonstrated a …

Adversarial examples on graph data: Deep insights into attack and defense

H Wu, C Wang, Y Tyshetskiy, A Docherty, K Lu… - arXiv preprint arXiv …, 2019 - arxiv.org
Graph deep learning models, such as graph convolutional networks (GCNs), achieve
remarkable performance for tasks on graph data. Similar to other types of deep models …

Ensemble adversarial training: Attacks and defenses

F Tramèr, A Kurakin, N Papernot, I Goodfellow… - arXiv preprint arXiv …, 2017 - arxiv.org
Adversarial examples are perturbed inputs designed to fool machine learning models.
Adversarial training injects such examples into training data to increase robustness. To …

Deepgauge: Multi-granularity testing criteria for deep learning systems

L Ma, F Juefei-Xu, F Zhang, J Sun, M Xue, B Li… - Proceedings of the 33rd …, 2018 - dl.acm.org
Deep learning (DL) defines a new data-driven programming paradigm that constructs the
internal system logic of a crafted neural network through a set of training data. We have …

Robust overfitting may be mitigated by properly learned smoothening

T Chen, Z Zhang, S Liu, S Chang… - … Conference on Learning …, 2020 - openreview.net
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in
adversarially robust training of deep networks, and that appropriate early-stopping of …

Structure invariant transformation for better adversarial transferability

X Wang, Z Zhang, J Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Given the severe vulnerability of Deep Neural Networks (DNNs) to adversarial
examples, there is an urgent need for an effective adversarial attack to identify the …

Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

A self-supervised approach for adversarial robustness

M Naseer, S Khan, M Hayat… - Proceedings of the …, 2020 - openaccess.thecvf.com
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)
based vision systems, e.g., for classification, segmentation and object detection. The …