Boosting adversarial training with hypersphere embedding

T Pang, X Yang, Y Dong, K Xu… - Advances in Neural …, 2020 - proceedings.neurips.cc
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for
deep learning models. In this work, we advocate incorporating the hypersphere embedding …

The Path to Defence: A Roadmap to Characterising Data Poisoning Attacks on Victim Models

T Chaalan, S Pang, J Kamruzzaman, I Gondal… - ACM Computing …, 2024 - dl.acm.org
Data Poisoning Attacks (DPA) represent a sophisticated technique aimed at distorting the
training data of machine learning models, thereby manipulating their behavior. This process …

CLIP: Cheap Lipschitz training of neural networks

L Bungert, R Raab, T Roith, L Schwinn… - … Conference on Scale …, 2021 - Springer
Despite the great success of deep neural networks (DNNs) in recent years, most neural
networks still lack mathematical guarantees in terms of stability. For instance, DNNs are …

Towards improving fast adversarial training in multi-exit network

S Chen, H Shen, R Wang, X Wang - Neural Networks, 2022 - Elsevier
Adversarial examples are usually generated by adding adversarial perturbations to clean
samples, designed to deceive the model into making wrong classifications. Adversarial …
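The snippet above describes the general recipe of perturbing clean samples to raise the model's loss. As a minimal sketch of that idea (not the multi-exit method of the cited paper), the one-step Fast Gradient Sign Method on a toy logistic-regression model looks like this; the model, weights, and `fgsm_perturb` helper are all hypothetical illustrations:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM on a toy logistic-regression model (illustrative only):
    nudge the input in the sign direction of the loss gradient w.r.t. x."""
    z = x @ w + b                    # logit of the linear model
    p = 1.0 / (1.0 + np.exp(-z))    # sigmoid probability
    # For binary cross-entropy, d(loss)/dx = (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Usage: a clean sample nudged to increase the classifier's loss.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.5])   # clean sample
y = 1.0                    # true label
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)  # → array([0.4, 0.6])
```

Real attacks apply the same sign-of-gradient step to a deep network's input gradient (obtained via backpropagation) rather than to a closed-form linear model.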

Artificial intelligence methods for security and cyber security systems

RN Rudd-Orthner - 2022 - etheses.whiterose.ac.uk
This research is in threat analysis and countermeasures employing Artificial Intelligence (AI)
methods within the civilian domain, where safety and mission-critical aspects are essential …

Deep ConvNet: Non-random weight initialization for repeatable determinism, examined with FSGM

RNM Rudd-Orthner, L Mihaylova - Sensors, 2021 - mdpi.com
A repeatable and deterministic non-random weight initialization method in convolutional
layers of neural networks is examined with the Fast Gradient Sign Method (FSGM). Using the …

Evaluation of Robustness Metrics for Defense of Machine Learning Systems

J DeMarchi, R Rijken, J Melrose… - 2023 International …, 2023 - ieeexplore.ieee.org
In this paper we explore some of the potential applications of robustness criteria for machine
learning (ML) systems by way of tangible “demonstrator” scenarios. In each demonstrator …

[Book][B] Towards Robust Models in Deep Learning: Regularizing Neural Networks and Generative Models

R Bao - 2021 - search.proquest.com
Deep neural networks are widely used in signal processing across a broad range of areas due
to their good performance, including computer vision, natural language processing …