The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data. In this work, we show that adversarial …
C Zhang, P Benz, T Imtiaz… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
A wide variety of works have explored the reason for the existence of adversarial examples, but there is no consensus on the explanation. We propose to treat the DNN logits as a vector …
Availability attacks, which poison the training data with imperceptible perturbations, can render the data unexploitable by machine learning algorithms so as to prevent unauthorized …
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples. By formalizing this …
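To make the setting of the two snippets above concrete: both availability and delusive attacks leave the labels intact and only shift the training features within a small budget. The following is a minimal sketch of that threat model, assuming a simple class-wise random perturbation clipped to an L-infinity budget eps; the function name make_availability_poison and the toy data are illustrative placeholders, not the specific attacks proposed in the cited works.

```python
# Minimal sketch of the availability/delusive-attack threat model: every training
# example keeps its correct label, but its features receive a small bounded
# perturbation. Here the perturbation is a fixed random pattern per class (a crude
# "shortcut"), clipped to an L_inf budget eps. Illustrative only, not any paper's method.
import numpy as np

def make_availability_poison(x_train, y_train, eps=8 / 255, seed=0):
    """Return poisoned copies of x_train (labels are left untouched)."""
    rng = np.random.default_rng(seed)
    x_poison = x_train.copy()
    for c in np.unique(y_train):
        # One fixed perturbation pattern per class, bounded in [-eps, eps].
        delta = rng.uniform(-eps, eps, size=x_train.shape[1:]).astype(x_train.dtype)
        mask = (y_train == c)
        # Apply the class-wise pattern and keep pixels in the valid [0, 1] range.
        x_poison[mask] = np.clip(x_poison[mask] + delta, 0.0, 1.0)
    return x_poison

# Example: poison a toy "image" dataset of shape (N, 32, 32, 3) with labels in {0,...,9}.
x = np.random.rand(100, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=100)
x_p = make_availability_poison(x, y)
assert np.max(np.abs(x_p - x)) <= 8 / 255 + 1e-6  # perturbation stays within the budget
```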
The booming interest in adversarial attacks stems from a misalignment between human vision and a deep neural network (DNN), i.e., a human-imperceptible perturbation fools the …
Deep neural networks are vulnerable to adversarial examples (AEs), which have adversarial transferability: AEs generated for the source model can mislead another (target) model's …
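The transferability mentioned in the snippet above can be checked directly: craft adversarial examples using only the gradients of one (source) model, then measure how much they degrade an independently trained (target) model. Below is a minimal sketch assuming single-step FGSM and two untrained toy MLPs as stand-ins for the source and target classifiers; a real experiment would use trained image models.

```python
# Minimal sketch of a transferability check: adversarial examples are crafted with
# the *source* model's gradients (single-step FGSM) and then simply evaluated on a
# separate *target* model. Models and data here are toy placeholders.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """One-step FGSM attack: x_adv = x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

torch.manual_seed(0)
source = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
target = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

x = torch.rand(32, 1, 28, 28)        # placeholder inputs in [0, 1]
y = torch.randint(0, 10, (32,))      # placeholder labels

x_adv = fgsm(source, x, y, eps=0.1)  # crafted only with the source model's gradients
print("target acc on clean inputs:", accuracy(target, x, y))
print("target acc on source AEs  :", accuracy(target, x_adv, y))  # a drop indicates transfer
```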
Y Zhang, J Sang - Proceedings of the 28th ACM International …, 2020 - dl.acm.org
Machine learning fairness concerns biases towards certain protected or sensitive groups of people when addressing target tasks. This paper studies the debiasing problem …
We investigate how the population nonlinearities resulting from lateral inhibition and thresholding in sparse coding networks influence neural response selectivity and …
S Ai, ASV Koe, T Huang - Applied Soft Computing, 2021 - Elsevier
Recent works have demonstrated that current deep neural networks suffer from small but intentional perturbations during the testing phase of the model. Such perturbations, aiming at …