Learning from noisy labels with deep neural networks: A survey

H Song, M Kim, D Park, Y Shin… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
Deep learning has achieved remarkable success in numerous domains with the help of large
amounts of data. However, the quality of data labels is a concern because of the lack of …

Better safe than sorry: Preventing delusive adversaries with adversarial training

L Tao, L Feng, J Yi, SJ Huang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by
slightly perturbing the features of correctly labeled training examples. By formalizing this …

Maximum mean discrepancy test is aware of adversarial attacks

R Gao, F Liu, J Zhang, B Han, T Liu… - International …, 2021 - proceedings.mlr.press
The maximum mean discrepancy (MMD) test could in principle detect any distributional
discrepancy between two datasets. However, it has been shown that the MMD test is …

Attack can benefit: An adversarial approach to recognizing facial expressions under noisy annotations

J Zheng, B Li, SC Zhang, S Wu, L Cao… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Real-world Facial Expression Recognition (FER) datasets usually exhibit
complex scenarios with coupled noise annotations and imbalanced class distributions …

Adversarial training with complementary labels: on the benefit of gradually informative attacks

J Zhou, J Zhu, J Zhang, T Liu, G Niu… - Advances in …, 2022 - proceedings.neurips.cc
Adversarial training (AT) with imperfect supervision is significant but receives limited
attention. To push AT towards more practical scenarios, we explore a brand new yet …

Can adversarial training be manipulated by non-robust features?

L Tao, L Feng, H Wei, J Yi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Adversarial training, originally designed to resist test-time adversarial examples, has been
shown to be promising in mitigating training-time availability attacks. This defense ability, however …

Adversarial attack for uncertainty estimation: identifying critical regions in neural networks

I Alarab, S Prakoonwit - Neural Processing Letters, 2022 - Springer
We propose a novel method to capture data points near the decision boundary in neural
networks that are often referred to as a specific type of uncertainty. In our approach, we sought to …

Accurate Forgetting for Heterogeneous Federated Continual Learning

A Wuerkaixi, S Cui, J Zhang, K Yan, B Han… - The Twelfth …, 2024 - openreview.net
Recent years have witnessed a burgeoning interest in federated learning (FL). However, the
contexts in which clients engage in sequential learning remain under-explored. Bridging FL …

A law of adversarial risk, interpolation, and label noise

D Paleka, A Sanyal - arXiv preprint arXiv:2207.03933, 2022 - arxiv.org
In supervised learning, it has been shown that label noise in the data can be interpolated
without penalties on test accuracy. We show that interpolating label noise induces …

Combining adversaries with anti-adversaries in training

X Zhou, N Yang, O Wu - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
Adversarial training is an effective learning technique to improve the robustness of deep
neural networks. In this study, the influence of adversarial training on deep learning models …