AugMax: Adversarial composition of random augmentations for robust training

H Wang, C Xiao, J Kossaifi, Z Yu… - Advances in neural …, 2021 - proceedings.neurips.cc
Data augmentation is a simple yet effective way to improve the robustness of deep neural
networks (DNNs). Diversity and hardness are two complementary dimensions of data …
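
The mechanism the abstract points at is to compose several random augmentations of each image and learn the mixing weights adversarially, so the resulting view is both diverse and hard. Below is a minimal sketch of that idea, assuming a list of image-to-image functions `augment_ops` plus an existing `model` and `loss_fn` (hypothetical names; this is not the authors' released implementation):

```python
import random
import torch

def augmax_batch(model, loss_fn, x, y, augment_ops, k=3, steps=5, lr=0.1):
    """Compose k random augmentations of x with adversarially learned weights."""
    ops = random.sample(augment_ops, k)
    views = torch.stack([op(x) for op in ops])        # (k, B, C, H, W)
    logits_w = torch.zeros(k, requires_grad=True)     # mixing logits
    m = torch.tensor(0.5, requires_grad=True)         # clean/augmented trade-off
    for _ in range(steps):
        w = torch.softmax(logits_w, 0).view(k, 1, 1, 1, 1)
        x_mix = (1 - m) * x + m * (w * views).sum(0)  # convex combination
        loss = loss_fn(model(x_mix), y)
        g_w, g_m = torch.autograd.grad(loss, [logits_w, m])
        with torch.no_grad():                         # gradient *ascent*: harder mix
            logits_w += lr * g_w
            m += lr * g_m
            m.clamp_(0.0, 1.0)
    with torch.no_grad():
        w = torch.softmax(logits_w, 0).view(k, 1, 1, 1, 1)
        return (1 - m) * x + m * (w * views).sum(0)
```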

CausalAdv: Adversarial robustness through the lens of causality

Y Zhang, M Gong, T Liu, G Niu, X Tian, B Han… - arXiv preprint arXiv …, 2021 - arxiv.org
The adversarial vulnerability of deep neural networks has attracted significant attention in
machine learning. As causal reasoning has an instinct for modelling distribution change, it is …

Better safe than sorry: Preventing delusive adversaries with adversarial training

L Tao, L Feng, J Yi, SJ Huang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by
slightly perturbing the features of correctly labeled training examples. By formalizing this …
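
The defence evaluated here builds on standard adversarial training: an inner loop crafts worst-case perturbations and the outer loop descends on them. For reference, a generic PGD-based training step, with assumed `model`, `loss_fn`, and optimizer `opt` (a sketch of the generic defence, not the paper's exact procedure against delusive perturbations):

```python
import torch

def pgd_train_step(model, loss_fn, opt, x, y, eps=8/255, alpha=2/255, steps=10):
    # Inner maximization: find a worst-case L_inf perturbation of the batch.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)        # project back into the eps-ball
    # Outer minimization: an ordinary optimizer step on the perturbed batch.
    opt.zero_grad()
    loss = loss_fn(model((x + delta.detach()).clamp(0, 1)), y)
    loss.backward()
    opt.step()
    return loss.item()
```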

A deep learning approach for robust detection of bots in twitter using transformers

D Martín-Gutiérrez, G Hernández-Peñaloza… - IEEE …, 2021 - ieeexplore.ieee.org
Over the last few decades, the volume of multimedia content posted on social networks has
grown exponentially, and such information is immediately propagated and consumed by a …

Probabilistic margins for instance reweighting in adversarial training

F Liu, B Han, T Liu, C Gong, G Niu… - Advances in …, 2021 - proceedings.neurips.cc
Reweighting adversarial data during training has been recently shown to improve
adversarial robustness, where data closer to the current decision boundaries are regarded …
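
The reweighting idea is that examples sitting close to the current decision boundary (small margin) should count more in the adversarial loss. A minimal sketch using a softmax-probability margin and an exponential weighting; the exact margin definition and the `beta` hyperparameter are assumptions here, since the paper proposes several probabilistic margins:

```python
import torch
import torch.nn.functional as F

def margin_weights(logits, y, beta=2.0):
    """Weight examples by closeness to the decision boundary."""
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, y.unsqueeze(1)).squeeze(1)
    wrong = probs.scatter(1, y.unsqueeze(1), 0.0)  # zero out the true class
    margin = p_true - wrong.max(dim=1).values      # in [-1, 1]
    w = torch.exp(-beta * margin)                  # small margin -> large weight
    return w / w.mean()                            # normalize to mean 1

# Usage inside a training loop (per-example loss, then reweight):
# per_ex = F.cross_entropy(logits, y, reduction='none')
# loss = (margin_weights(logits.detach(), y) * per_ex).mean()
```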

Image-based scam detection method using an attention capsule network

L Bian, L Zhang, K Zhao, H Wang, S Gong - IEEE Access, 2021 - ieeexplore.ieee.org
In recent years, the rapid development of blockchain technology has attracted much
attention from people around the world. Scammers take advantage of the pseudo-anonymity …

Sparse and imperceptible adversarial attack via a homotopy algorithm

M Zhu, T Chen, Z Wang - International Conference on …, 2021 - proceedings.mlr.press
Sparse adversarial attacks can fool deep neural networks (DNNs) by only perturbing a few
pixels (regularized by the $\ell_0$ norm). Recent efforts combine it with another $\ell_\infty …
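
For contrast, a naive sparse-attack baseline fixes a top-k sparsity pattern from the initial gradient and then takes sign-gradient steps on those coordinates only. This illustrates the joint $\ell_0$/$\ell_\infty$ constraint but is not the paper's homotopy algorithm; `model` and `loss_fn` are assumed to exist:

```python
import torch

def topk_sparse_attack(model, loss_fn, x, y, k=50, eps=0.1, alpha=0.02, steps=20):
    x0 = x.detach()
    x_adv = x0.clone().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)
    # Fix the sparsity pattern once: the k coordinates with largest |grad|.
    flat = grad.abs().flatten(1)
    idx = flat.topk(k, dim=1).indices
    mask = torch.zeros_like(flat).scatter_(1, idx, 1.0).view_as(x0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign() * mask  # only k coordinates move
            x_adv = x0 + (x_adv - x0).clamp(-eps, eps)  # L_inf bound
            x_adv.clamp_(0.0, 1.0)                      # valid image range
    return x_adv.detach()
```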

Learning to generate visual questions with noisy supervision

S Kai, L Wu, S Tang, Y Zhuang… - Advances in …, 2021 - proceedings.neurips.cc
The task of visual question generation (VQG) aims to generate human-like questions
from an image and potentially other side information (e.g., answer type or the answer itself) …

Meta two-sample testing: Learning kernels for testing with limited data

F Liu, W Xu, J Lu, DJ Sutherland - Advances in Neural …, 2021 - proceedings.neurips.cc
Modern kernel-based two-sample tests have shown great success in distinguishing
complex, high-dimensional distributions by learning appropriate kernels (or, as a special …
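
The basic statistic these learned-kernel tests generalize is the maximum mean discrepancy (MMD). A NumPy sketch of the standard unbiased MMD² estimator with a fixed Gaussian kernel (the paper's meta-learning of the kernel itself is omitted):

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of MMD^2 between samples X (n, d) and Y (m, d)."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    n, m = len(X), len(Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))  # drop diagonal terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()
```

A large value of this statistic (relative to a permutation-based null) is evidence that X and Y come from different distributions; the paper's contribution is learning the kernel from related tasks when samples are scarce.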

Efficient statistical tests: A neural tangent kernel approach

S Jia, E Nezhadarya, Y Wu… - … Conference on Machine …, 2021 - proceedings.mlr.press
For machine learning models to make reliable predictions in deployment, one needs to
ensure that previously unknown test samples are sufficiently similar to the training …