Maximum mean discrepancy is aware of adversarial attacks

R Gao, F Liu, J Zhang, B Han, T Liu, G Niu… - arXiv preprint arXiv …, 2020 - researchgate.net
The maximum mean discrepancy (MMD) test, as a representative two-sample test, could in
principle detect any distributional discrepancy between two datasets. However, it has been …

Maximum mean discrepancy test is aware of adversarial attacks

R Gao, F Liu, J Zhang, B Han, T Liu… - International …, 2021 - proceedings.mlr.press
The maximum mean discrepancy (MMD) test could in principle detect any distributional
discrepancy between two datasets. However, it has been shown that the MMD test is …
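The MMD test described in the two entries above compares two samples via the distance between their kernel mean embeddings. A minimal sketch of the idea (not the authors' code; the Gaussian kernel, biased estimator, and permutation-based p-value are standard choices assumed here for illustration):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def mmd_permutation_test(X, Y, n_perm=200, sigma=1.0, seed=0):
    """Permutation test: under H0 (same distribution) the pooled
    samples are exchangeable, so the observed statistic should not
    stand out among statistics from random re-splits."""
    rng = np.random.default_rng(seed)
    observed = mmd2_biased(X, Y, sigma)
    Z = np.vstack([X, Y])
    n = len(X)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        count += mmd2_biased(Z[idx[:n]], Z[idx[n:]], sigma) >= observed
    return observed, (count + 1) / (n_perm + 1)  # statistic, p-value
```

In the adversarial-detection setting above, one sample would be natural data and the other suspected adversarial data; a small p-value signals a distributional discrepancy.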

Detecting adversarial data by probing multiple perturbations using expected perturbation score

S Zhang, F Liu, J Yang, Y Yang, C Li… - … on machine learning, 2023 - proceedings.mlr.press
Adversarial detection aims to determine whether a given sample is an adversarial one
based on the discrepancy between natural and adversarial distributions. Unfortunately …

The adversarial attack and detection under the Fisher information metric

C Zhao, PT Fletcher, M Yu, Y Peng, G Zhang… - Proceedings of the …, 2019 - ojs.aaai.org
Many deep learning models are vulnerable to adversarial attacks, i.e., imperceptible but
intentionally designed perturbations to the input can cause incorrect output of the networks …

Detecting adversarial samples using influence functions and nearest neighbors

G Cohen, G Sapiro, R Giryes - Proceedings of the IEEE/CVF …, 2020 - openaccess.thecvf.com
Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks,
which are small perturbations added to their input images to mislead their prediction …

Are odds really odd? Bypassing statistical detection of adversarial examples

H Hosseini, S Kannan, R Poovendran - arXiv preprint arXiv:1907.12138, 2019 - arxiv.org
Deep learning classifiers are known to be vulnerable to adversarial examples. A recent
paper presented at ICML 2019 proposed a statistical test detection method based on the …

Identifying adversarially attackable and robust samples

V Raina, M Gales - arXiv preprint arXiv:2301.12896, 2023 - arxiv.org
Adversarial attacks insert small, imperceptible perturbations to input samples that cause
large, undesired changes to the output of deep learning models. Despite extensive research …

Detecting adversarial samples for deep neural networks through mutation testing

J Wang, J Sun, P Zhang, X Wang - arXiv preprint arXiv:1805.05010, 2018 - arxiv.org
Recently, it has been shown that deep neural networks (DNN) are subject to attacks through
adversarial samples. Adversarial samples are often crafted through adversarial perturbation …

Defending against adversarial attacks by leveraging an entire GAN

GK Santhanam, P Grnarova - arXiv preprint arXiv:1805.10652, 2018 - arxiv.org
Recent work has shown that state-of-the-art models are highly vulnerable to adversarial
perturbations of the input. We propose cowboy, an approach to detecting and defending …

Detecting adversarial examples using data manifolds

S Jha, U Jang, S Jha, B Jalaian - MILCOM 2018-2018 IEEE …, 2018 - ieeexplore.ieee.org
Models produced by machine learning, particularly deep neural networks, are state-of-the-
art for many machine learning tasks and demonstrate very high prediction accuracy …