Edge artificial intelligence for 6G: Vision, enabling technologies, and applications

KB Letaief, Y Shi, J Lu, J Lu - IEEE Journal on Selected Areas …, 2021 - ieeexplore.ieee.org
The thriving of artificial intelligence (AI) applications is driving the further evolution of
wireless networks. It has been envisioned that 6G will be transformative and will …

Attack of the tails: Yes, you really can backdoor federated learning

H Wang, K Sreenivasan, S Rajput… - Advances in …, 2020 - proceedings.neurips.cc
Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in
the form of backdoors during training. The goal of a backdoor is to corrupt the performance …

Privacy and robustness in federated learning: Attacks and defenses

L Lyu, H Yu, X Ma, C Chen, L Sun… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
As data are increasingly being stored in different silos and societies become more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …

How to backdoor federated learning

E Bagdasaryan, A Veit, Y Hua… - International …, 2020 - proceedings.mlr.press
Federated models are created by aggregating model updates submitted by participants. To
protect confidentiality of the training data, the aggregator by design has no visibility into how …

Salvaging federated learning by local adaptation

T Yu, E Bagdasaryan, V Shmatikov - arXiv preprint arXiv:2002.04758, 2020 - arxiv.org
Federated learning (FL) is a heavily promoted approach for training ML models on sensitive
data, e.g., text typed by users on their smartphones. FL is expressly designed for training on …

Byzantine-robust federated machine learning through adaptive model averaging

L Muñoz-González, KT Co, EC Lupu - arXiv preprint arXiv:1909.05125, 2019 - arxiv.org
Federated learning enables training collaborative machine learning models at scale with
many participants whilst preserving the privacy of their datasets. Standard federated …

Byzantine-robust learning on heterogeneous datasets via bucketing

SP Karimireddy, L He, M Jaggi - arXiv preprint arXiv:2006.09365, 2020 - arxiv.org
In Byzantine robust distributed or federated learning, a central server wants to train a
machine learning model over data distributed across multiple workers. However, a fraction …

DETOX: A redundancy-based framework for faster and more robust gradient aggregation

S Rajput, H Wang, Z Charles… - Advances in Neural …, 2019 - proceedings.neurips.cc
To improve the resilience of distributed training to worst-case, or Byzantine node failures,
several recent methods have replaced gradient averaging with robust aggregation methods …

Federated graph neural network for fast anomaly detection in controller area networks

H Zhang, K Zeng, S Lin - IEEE Transactions on Information …, 2023 - ieeexplore.ieee.org
Due to the lack of CAN frame encryption and authentication, the CAN bus is vulnerable to
various attacks, which can in general be divided into message injection, suspension, and …

Variance reduction is an antidote to byzantines: Better rates, weaker assumptions and communication compression as a cherry on the top

E Gorbunov, S Horváth, P Richtárik, G Gidel - arXiv preprint arXiv …, 2022 - arxiv.org
Byzantine-robustness has been gaining a lot of attention due to the growing interest in
collaborative and federated learning. However, many fruitful directions, such as the usage of …