A holistic review of machine learning adversarial attacks in IoT networks

H Khazane, M Ridouani, F Salahdine, N Kaabouch - Future Internet, 2024 - mdpi.com
With the rapid advancements and notable achievements across various application
domains, Machine Learning (ML) has become a vital element within the Internet of Things …

Adversarial attack and defense: A survey

H Liang, E He, Y Zhao, Z Jia, H Li - Electronics, 2022 - mdpi.com
In recent years, artificial intelligence technology represented by deep learning has achieved
remarkable results in image recognition, semantic analysis, natural language processing …

Enhancing adversarial training with second-order statistics of weights

G Jin, X Yi, W Huang, S Schewe… - Proceedings of the …, 2022 - openaccess.thecvf.com
Adversarial training has been shown to be one of the most effective approaches to improve
the robustness of deep neural networks. It is formalized as a min-max optimization over …
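The min-max objective mentioned above can be illustrated with a minimal sketch: an inner step that maximizes the loss by perturbing the input (a one-step FGSM-style attack), and an outer step that minimizes the loss on the perturbed input. The logistic-regression setting and the names `fgsm_perturb` and `adversarial_train_step` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fgsm_perturb(w, b, x, y, eps):
    """Inner maximization: move x by eps in the sign of the loss
    gradient w.r.t. the input (one-step FGSM on a logistic model)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid probability
    grad_x = (p - y) * w                     # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

def adversarial_train_step(w, b, x, y, eps=0.1, lr=0.5):
    """Outer minimization: one gradient step on the adversarially
    perturbed example, i.e. one iteration of the min-max objective."""
    x_adv = fgsm_perturb(w, b, x, y, eps)
    p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
    return w - lr * (p - y) * x_adv, b - lr * (p - y)
```

In full adversarial training the inner step is usually iterated (e.g. multi-step PGD with projection onto an eps-ball) rather than taken once.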

Beyond preserved accuracy: Evaluating loyalty and robustness of BERT compression

C Xu, W Zhou, T Ge, K Xu, J McAuley, F Wei - arXiv preprint arXiv …, 2021 - arxiv.org
Recent studies on compression of pretrained language models (e.g., BERT) usually use
preserved accuracy as the metric for evaluation. In this paper, we propose two new metrics …

Learning diverse-structured networks for adversarial robustness

X Du, J Zhang, B Han, T Liu, Y Rong… - International …, 2021 - proceedings.mlr.press
In adversarial training (AT), the main focus has been the objective and optimizer while the
model has been less studied, so that the models being used are still those classic ones in …

Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods

A Ma, Y Pan, A Farahmand - arXiv preprint arXiv:2308.06703, 2023 - arxiv.org
Stochastic gradient descent (SGD) and adaptive gradient methods, such as Adam and
RMSProp, have been widely used in training deep neural networks. We empirically show …

Adversarial parameter attack on deep neural networks

L Yu, Y Wang, XS Gao - International Conference on …, 2023 - proceedings.mlr.press
The parameter perturbation attack is a safety threat to deep learning, where small parameter
perturbations are made such that the attacked network gives wrong or desired labels of the …

Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models

K Zhao, X Chen, W Huang, L Ding, X Kong… - arXiv preprint arXiv …, 2024 - arxiv.org
The integration of an ensemble of deep learning models has been extensively explored to
enhance defense against adversarial attacks. The diversity among sub-models increases …
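The core idea of an ensemble defense as described in the snippet above can be sketched as averaging the logits of several diverse sub-models, so that a perturbation crafted against one member is less likely to fool the combined prediction. This is a generic illustration, not the dispersed low-curvature construction the paper proposes.

```python
import numpy as np

def ensemble_logits(models, x):
    """Average the logit vectors produced by each sub-model.
    Each element of `models` is any callable mapping x to a logit array."""
    return np.mean([m(x) for m in models], axis=0)

def ensemble_predict(models, x):
    """Predicted class of the ensemble: argmax of the averaged logits."""
    return int(np.argmax(ensemble_logits(models, x)))
```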

Robust and information-theoretically safe bias classifier against adversarial attacks

L Yu, XS Gao - arXiv preprint arXiv:2111.04404, 2021 - arxiv.org
In this paper, the bias classifier is introduced: the bias part of a DNN with ReLU as the
activation function is used as a classifier. The work is motivated by the fact that the bias part …

Relationship between prediction uncertainty and adversarial robustness

S Chen, H Shen, R Wang, X Wang - Journal of Software, 2022 - jos.org.cn
Adversarial robustness refers to a model's ability to resist adversarial examples, and adversarial training is a common method for improving it.
However, adversarial training reduces the model's accuracy on clean examples, a phenomenon known as the accuracy-robustness …