H Liang, E He, Y Zhao, Z Jia, H Li - Electronics, 2022 - mdpi.com
In recent years, artificial intelligence technology represented by deep learning has achieved remarkable results in image recognition, semantic analysis, natural language processing …
Adversarial training has been shown to be one of the most effective approaches to improve the robustness of deep neural networks. It is formalized as a min-max optimization over …
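The min-max formulation mentioned above can be sketched on a toy model: the inner maximization finds a worst-case input perturbation, and the outer minimization updates the weights at that perturbed input. This is a minimal illustration assuming a linear logistic model and a one-step FGSM-style inner maximization (all names are illustrative, not from the paper):

```python
import numpy as np

def loss(w, x, y):
    # logistic loss for a linear classifier, y in {-1, +1}
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm_perturb(w, x, y, eps):
    # inner maximization: one sign-gradient ascent step on the input
    z = y * np.dot(w, x)
    grad_x = -y * w * (1.0 - 1.0 / (1.0 + np.exp(-z)))  # d loss / d x
    return x + eps * np.sign(grad_x)

def adv_train_step(w, x, y, eps, lr):
    # outer minimization: gradient descent on w at the adversarial input
    x_adv = fgsm_perturb(w, x, y, eps)
    z = y * np.dot(w, x_adv)
    grad_w = -y * x_adv * (1.0 - 1.0 / (1.0 + np.exp(-z)))  # d loss / d w
    return w - lr * grad_w
```

For the logistic loss the FGSM step provably does not decrease the loss, which is what the inner maximization is after; full adversarial training replaces the single step with several projected gradient steps.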
Recent studies on compression of pretrained language models (e.g., BERT) usually use preserved accuracy as the metric for evaluation. In this paper, we propose two new metrics …
In adversarial training (AT), the main focus has been on the objective and the optimizer, while the model has been less studied, so that the models being used are still those classic ones in …
Stochastic gradient descent (SGD) and adaptive gradient methods, such as Adam and RMSProp, have been widely used in training deep neural networks. We empirically show …
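For reference, the update rules being compared in the snippet above can be written out side by side. This is a textbook sketch of plain SGD and Adam (constants and names are the standard ones, not taken from the paper):

```python
import numpy as np

def sgd_step(w, grad, lr):
    # plain SGD: move against the raw gradient
    return w - lr * grad

def adam_step(w, grad, state, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: exponential moving averages of the gradient (m) and its
    # square (v), with bias correction, give a per-coordinate step size
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)
```

The empirical comparisons the paper refers to are about how these two update rules interact with robustness, not about their convergence on simple objectives.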
L Yu, Y Wang, XS Gao - International Conference on …, 2023 - proceedings.mlr.press
The parameter perturbation attack is a safety threat to deep learning, where small parameter perturbations are made such that the attacked network gives wrong or desired labels of the …
K Zhao, X Chen, W Huang, L Ding, X Kong… - arXiv preprint arXiv …, 2024 - arxiv.org
The integration of an ensemble of deep learning models has been extensively explored to enhance defense against adversarial attacks. The diversity among sub-models increases …
In this paper, the bias classifier is introduced, that is, the bias part of a DNN with ReLU as the activation function is used as a classifier. The work is motivated by the fact that the bias part …
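The "bias part" referred to above comes from the fact that a ReLU network is piecewise linear: on each linear region, f(x) = Jx + b(x), where J is the local Jacobian and b(x) collects the bias terms routed through the active ReLUs. A minimal sketch for a one-hidden-layer network (the decomposition is standard; the function names are hypothetical):

```python
import numpy as np

def relu_net(x, W1, b1, W2, b2):
    # one-hidden-layer ReLU network
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def bias_part(x, W1, b1, W2, b2):
    # on x's linear region, f(x) = J x + bias; return the bias term
    mask = (W1 @ x + b1 > 0).astype(float)   # active ReLU pattern at x
    J = W2 @ (W1 * mask[:, None])            # local Jacobian
    return relu_net(x, W1, b1, W2, b2) - J @ x
```

For this architecture the bias part reduces to W2 @ (mask * b1) + b2, i.e. the biases propagated through the active units only, which is what the bias classifier uses in place of the full network output.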