The maximum mean discrepancy (MMD) test could in principle detect any distributional discrepancy between two datasets. However, it has been shown that the MMD test is …
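The MMD statistic mentioned above compares two samples via kernel mean embeddings. A minimal sketch of a biased squared-MMD estimate with an RBF kernel follows; the function names, bandwidth choice, and sample shapes are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise squared Euclidean distances, then a Gaussian (RBF) kernel.
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased estimate of E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    # using plain sample means over all kernel entries.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
# Two samples from the same distribution: MMD^2 should be near zero.
same = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
# Samples from shifted distributions: MMD^2 should be clearly larger.
diff = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

In a two-sample test, the observed statistic would then be compared against a permutation-based null distribution; this sketch shows only the statistic itself.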
S Zhang, F Liu, J Yang, Y Yang, C Li… - … on machine learning, 2023 - proceedings.mlr.press
Adversarial detection aims to determine whether a given sample is an adversarial one based on the discrepancy between natural and adversarial distributions. Unfortunately …
Many deep learning models are vulnerable to adversarial attacks, i.e., imperceptible but intentionally designed perturbations to the input that can cause incorrect outputs from the networks …
Deep neural networks (DNNs) are notorious for their vulnerability to adversarial attacks, which are small perturbations added to their input images to mislead their predictions …
Deep learning classifiers are known to be vulnerable to adversarial examples. A recent paper presented at ICML 2019 proposed a statistical test detection method based on the …
V Raina, M Gales - arXiv preprint arXiv:2301.12896, 2023 - arxiv.org
Adversarial attacks insert small, imperceptible perturbations to input samples that cause large, undesired changes to the output of deep learning models. Despite extensive research …
Recently, it has been shown that deep neural networks (DNNs) are vulnerable to attacks via adversarial samples. Adversarial samples are often crafted through adversarial perturbation …
Recent work has shown that state-of-the-art models are highly vulnerable to adversarial perturbations of the input. We propose cowboy, an approach to detecting and defending …
Models produced by machine learning, particularly deep neural networks, are state-of-the-art for many machine learning tasks and demonstrate very high prediction accuracy …