Toward the third generation artificial intelligence

B Zhang, J Zhu, H Su - Science China Information Sciences, 2023 - Springer
There have been two competing paradigms in artificial intelligence (AI) development ever
since its birth in 1956, i.e., symbolism and connectionism (or sub-symbolism). While …

Explainable AI: A review of machine learning interpretability methods

P Linardatos, V Papastefanopoulos, S Kotsiantis - Entropy, 2020 - mdpi.com
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption,
with machine learning systems demonstrating superhuman performance in a significant …

On evaluating adversarial robustness of large vision-language models

Y Zhao, T Pang, C Du, X Yang, C Li… - Advances in …, 2024 - proceedings.neurips.cc
Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented
performance in response generation, especially with visual inputs, enabling more creative …

Improving robustness using generated data

S Gowal, SA Rebuffi, O Wiles… - Advances in …, 2021 - proceedings.neurips.cc
Recent work argues that robust training requires substantially larger datasets than those
required for standard classification. On CIFAR-10 and CIFAR-100, this translates into a …

PeCo: Perceptual codebook for BERT pre-training of vision transformers

X Dong, J Bao, T Zhang, D Chen, W Zhang… - Proceedings of the …, 2023 - ojs.aaai.org
This paper explores a better prediction target for BERT pre-training of vision transformers.
We observe that current prediction targets disagree with human perception judgment. This …

Data augmentation can improve robustness

SA Rebuffi, S Gowal, DA Calian… - Advances in …, 2021 - proceedings.neurips.cc
Adversarial training suffers from robust overfitting, a phenomenon where the robust test
accuracy starts to decrease during training. In this paper, we focus on reducing robust …

Enhancing the transferability of adversarial attacks through variance tuning

X Wang, K He - Proceedings of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples that mislead the models with
imperceptible perturbations. Though adversarial attacks have achieved incredible success …

Frequency domain model augmentation for adversarial attack

Y Long, Q Zhang, B Zeng, L Gao, X Liu, J Zhang… - European conference on …, 2022 - Springer
For black-box attacks, the gap between the substitute model and the victim model is usually
large, which manifests as a weak attack performance. Motivated by the observation that the …

LAS-AT: Adversarial training with learnable attack strategy

X Jia, Y Zhang, B Wu, K Ma… - Proceedings of the …, 2022 - openaccess.thecvf.com
Adversarial training (AT) is always formulated as a minimax problem, of which the
performance depends on the inner optimization that involves the generation of adversarial …

Improving adversarial transferability via neuron attribution-based attacks

J Zhang, W Wu, J Huang, Y Huang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. It is thus
imperative to devise effective attack algorithms to identify the deficiencies of DNNs …