Reviewing federated learning aggregation algorithms; strategies, contributions, limitations and future perspectives

M Moshawrab, M Adda, A Bouzouane, H Ibrahim… - Electronics, 2023 - mdpi.com
The success of machine learning (ML) techniques in the formerly difficult areas of data
analysis and pattern extraction has led to their widespread incorporation into various …
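
A minimal sketch of federated averaging (FedAvg), the canonical aggregation strategy such surveys cover, may help orient the reader; the size-weighted averaging below is the standard formulation, not a detail taken from this particular review:

import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: size-weighted average of client model parameters."""
    total = sum(client_sizes)
    aggregated = {}
    for name in client_weights[0]:
        # Each client contributes in proportion to its share of the data.
        aggregated[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return aggregated

# Two toy clients sharing one layer: expect (1/4)*1 + (3/4)*3 = 2.5.
clients = [{"fc": np.ones((2, 2))}, {"fc": 3 * np.ones((2, 2))}]
print(fedavg(clients, [100, 300])["fc"])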

Adaptive differential privacy in vertical federated learning for mobility forecasting

FZ Errounda, Y Liu - Future Generation Computer Systems, 2023 - Elsevier
Differential privacy is the de facto technique for protecting the individuals in the training
dataset and the learning models in deep learning. However, the technique presents two …
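
For context, differential privacy in deep learning is usually realized with the Gaussian mechanism: clip a quantity to bound its sensitivity, then add noise calibrated to that bound. The sketch below shows only that generic mechanism; the paper's adaptive noise allocation for vertical federated learning is not reproduced here:

import numpy as np

def gaussian_mechanism(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip to bound L2 sensitivity, then add Gaussian noise scaled to it."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

print(gaussian_mechanism(np.array([3.0, 4.0])))  # norm 5 is clipped to 1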

Unveiling vulnerabilities in deep learning-based malware detection: Differential privacy driven adversarial attacks

R Taheri, M Shojafar, F Arabikhan, A Gegov - Computers & Security, 2024 - Elsevier
The exponential increase in Android malware poses a severe threat, motivating the
development of machine learning and especially deep learning-based classifiers to detect …
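
As a baseline for the kind of evasion attack studied here, the fast gradient sign method (FGSM) perturbs an input along the sign of the loss gradient. This is a generic sketch, not the authors' differential-privacy-driven attack, and a real malware attack would also have to keep the perturbed features feasible (e.g., binary):

import torch

def fgsm(model, x, y, eps=0.1):
    """FGSM: one gradient-sign step that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Toy stand-in for a malware classifier over 8 features.
model = torch.nn.Linear(8, 2)
x, y = torch.rand(4, 8), torch.randint(0, 2, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # each feature moves by at most eps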

The Path to Defence: A Roadmap to Characterising Data Poisoning Attacks on Victim Models

T Chaalan, S Pang, J Kamruzzaman, I Gondal… - ACM Computing …, 2024 - dl.acm.org
Data Poisoning Attacks (DPA) represent a sophisticated technique aimed at distorting the
training data of machine learning models, thereby manipulating their behavior. This process …
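
The simplest member of the attack family being characterized is label flipping; a minimal sketch, not tied to any specific taxonomy in the survey:

import numpy as np

def label_flip_poison(y, rate=0.1, target=1, rng=None):
    """Label-flipping poison: relabel a fraction of non-target points as target."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    victims = np.flatnonzero(y != target)          # points not already `target`
    k = min(int(rate * len(y)), len(victims))
    flipped = rng.choice(victims, size=k, replace=False)
    y_poisoned[flipped] = target                   # mislabel them
    return y_poisoned, flipped

y = np.random.default_rng(1).integers(0, 2, 100)
y_p, idx = label_flip_poison(y, rate=0.05)
print(f"flipped {len(idx)} of {len(y)} labels")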

SoK: Unintended interactions among machine learning defenses and risks

V Duddu, S Szyller, N Asokan - 2024 IEEE Symposium on …, 2024 - ieeexplore.ieee.org
Deployments of machine learning (ML) models cannot neglect risks to security, privacy, and fairness.
Several defenses have been proposed to mitigate such risks. When a defense is effective in …

An adversarial perspective on accuracy, robustness, fairness, and privacy: multilateral-tradeoffs in trustworthy ML

A Gittens, B Yener, M Yung - IEEE Access, 2022 - ieeexplore.ieee.org
Model accuracy is the traditional metric employed in machine learning (ML) applications.
However, privacy, fairness, and robustness guarantees are crucial as ML algorithms …

Threshold Switching Memristor-Based Radial-Based Spiking Neuron Circuit for Conversion Based Spiking Neural Networks Adversarial Attack Improvement

Z Wu, W Li, J Zou, Z Feng, T Chen… - … on Circuits and …, 2023 - ieeexplore.ieee.org
The analog neural network to spiking neural network (ANN-to-SNN) conversion is an
effective method for improving the performance of SNNs. However, the existing mainstream …
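
The conversion idea rests on a rate-coding correspondence: an integrate-and-fire neuron driven by a constant input fires at a rate approximating ReLU(input)/threshold. A single-neuron sketch of that correspondence (not the paper's radial-basis memristor circuit):

def if_neuron_rate(input_current, threshold=1.0, steps=100):
    """Integrate-and-fire with reset-by-subtraction; returns the firing rate."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current            # integrate the constant input
        if v >= threshold:            # fire, then subtract the threshold
            spikes += 1
            v -= threshold
    return spikes / steps

for a in (-0.5, 0.2, 0.7):
    print(a, if_neuron_rate(a))       # ~max(0, a): 0.0, 0.2, 0.7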

Differentially private optimizers can learn adversarially robust models

Z Bu, Y Zhang - Transactions on Machine Learning Research, 2023 - openreview.net
Machine learning models have shone in a variety of domains and attracted increasing
attention from both the security and the privacy communities. One important yet worrying …
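
DP-SGD, the optimizer family this paper studies, clips each per-sample gradient and perturbs the batch average; a minimal sketch of one update, following the standard Abadi et al. recipe rather than any robustness-specific variant:

import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD update: per-sample clipping, averaging, Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    mean = np.mean(clipped, axis=0)
    # Noise of std sigma * C on the gradient sum, divided by the batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean.shape)
    return params - lr * (mean + noise)

grads = [np.array([3.0, 0.0]), np.array([0.0, 0.5])]
print(dp_sgd_step(np.zeros(2), grads))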

Augment then smooth: Reconciling differential privacy with certified robustness

J Wu, AA Ghomi, D Glukhov, JC Cresswell… - arXiv preprint arXiv …, 2023 - arxiv.org
Machine learning models are susceptible to a variety of attacks that can erode trust in their
deployment. These threats include attacks against the privacy of training data and …
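
Certified robustness in this line of work usually means randomized smoothing: classify many Gaussian-noised copies of the input and take the majority vote (the certification step of Cohen et al. adds a statistical test, omitted here). A prediction-only sketch, not the paper's augment-then-smooth pipeline:

import torch

def smoothed_predict(model, x, sigma=0.25, n=100):
    """Randomized smoothing prediction: majority vote under Gaussian noise."""
    votes = torch.zeros(model.out_features, dtype=torch.long)
    for _ in range(n):
        votes[model(x + sigma * torch.randn_like(x)).argmax()] += 1
    return votes.argmax().item(), votes

model = torch.nn.Linear(10, 3)   # stand-in classifier
cls, votes = smoothed_predict(model, torch.randn(10))
print(cls, votes.tolist())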

A Gradual Adversarial Training Method for Semantic Segmentation

Y Zan, P Lu, T Meng - Remote Sensing, 2024 - mdpi.com
Deep neural networks (DNNs) have achieved great success in various computer vision
tasks. However, they are susceptible to artificially designed adversarial perturbations, which …
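
Gradual adversarial training, in its generic form, ramps the perturbation budget up over epochs so that early training is not destabilized. The classification-flavored sketch below illustrates only that schedule and is not the authors' segmentation pipeline:

import torch

def gradual_adversarial_train(model, loader, epochs=5, eps_max=0.3):
    """Adversarial training with a linearly growing FGSM budget."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    ce = torch.nn.functional.cross_entropy
    for epoch in range(epochs):
        eps = eps_max * (epoch + 1) / epochs   # budget grows each epoch
        for x, y in loader:
            x_adv = x.clone().detach().requires_grad_(True)
            ce(model(x_adv), y).backward()     # gradient w.r.t. the input
            x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
            opt.zero_grad()
            ce(model(x_adv), y).backward()     # train on the perturbed batch
            opt.step()
    return model

X, Y = torch.randn(64, 8), torch.randint(0, 2, (64,))
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, Y), batch_size=16)
gradual_adversarial_train(torch.nn.Linear(8, 2), loader)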