Heterogeneous federated learning: State-of-the-art and research challenges

M Ye, X Fang, B Du, PC Yuen, D Tao - ACM Computing Surveys, 2023 - dl.acm.org
Federated learning (FL) has drawn increasing attention owing to its potential use in large-
scale industrial applications. Existing FL works mainly focus on model homogeneous …

Combining federated learning and edge computing toward ubiquitous intelligence in 6G network: Challenges, recent advances, and future directions

Q Duan, J Huang, S Hu, R Deng… - … Surveys & Tutorials, 2023 - ieeexplore.ieee.org
Fully leveraging the huge volume of data generated on large numbers of user devices to
provide intelligent services in the 6G network calls for Ubiquitous Intelligence (UI). A key to …

Back to the drawing board: A critical evaluation of poisoning attacks on production federated learning

V Shejwalkar, A Houmansadr… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
While recent works have indicated that federated learning (FL) may be vulnerable to
poisoning attacks by compromised clients, their real impact on production FL systems is not …

Backdoor learning: A survey

Y Li, Y Jiang, Z Li, ST Xia - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
A backdoor attack aims to embed hidden backdoors into deep neural networks (DNNs), so
that the attacked models perform well on benign samples, whereas their predictions will be …
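The trigger-based data poisoning this survey covers can be illustrated with a minimal sketch; the patch position, size, and target label below are arbitrary choices for illustration, not taken from the paper.

```python
def stamp_trigger(image, target_label, patch_value=1.0, patch_size=2):
    # Copy the image, overwrite a small corner patch (the "trigger"),
    # and relabel the sample to the attacker-chosen target class.
    # A model trained on enough such samples behaves normally on clean
    # inputs but predicts target_label whenever the trigger is present.
    poisoned = [row[:] for row in image]
    for r in range(patch_size):
        for c in range(patch_size):
            poisoned[r][c] = patch_value
    return poisoned, target_label

# Example: poison a 4x4 all-zero "image" toward class 7.
clean = [[0.0] * 4 for _ in range(4)]
poisoned_img, poisoned_label = stamp_trigger(clean, target_label=7)
```

The clean sample is left untouched; only the returned copy carries the trigger patch and the flipped label.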

Privacy and robustness in federated learning: Attacks and defenses

L Lyu, H Yu, X Ma, C Chen, L Sun… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
As data are increasingly stored in different silos and societies become more aware
of data privacy issues, the traditional centralized training of artificial intelligence (AI) models …

MPAF: Model poisoning attacks to federated learning based on fake clients

X Cao, NZ Gong - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Existing model poisoning attacks to federated learning assume that an attacker has access
to a large fraction of compromised genuine clients. However, such an assumption is not realistic …

FLCert: Provably secure federated learning against poisoning attacks

X Cao, Z Zhang, J Jia, NZ Gong - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Due to its distributed nature, federated learning is vulnerable to poisoning attacks, in which
malicious clients poison the training process via manipulating their local training data and/or …
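The vulnerability described in this entry can be sketched with a minimal federated-averaging round in which one client submits a scaled update; the update values and scaling factor are illustrative assumptions, not the paper's attack.

```python
def fedavg(updates):
    # Plain (unweighted) federated averaging: element-wise mean
    # of the client updates received by the server.
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

# Three honest clients send small, roughly agreeing updates.
honest = [[0.1, -0.2], [0.12, -0.18], [0.09, -0.22]]

# One malicious client scales an arbitrary direction by a large factor;
# with no robust aggregation, it dominates the unweighted mean.
boost = 100.0
malicious = [[boost * 1.0, boost * 1.0]]

global_update = fedavg(honest + malicious)
```

A single poisoned update is enough to drag the global model far from the honest consensus, which is why robust or certified aggregation rules (as in FLCert) are studied.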

Combined federated and split learning in edge computing for ubiquitous intelligence in internet of things: State-of-the-art and future directions

Q Duan, S Hu, R Deng, Z Lu - Sensors, 2022 - mdpi.com
Federated learning (FL) and split learning (SL) are two emerging collaborative learning
methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT) …

RAB: Provable robustness against backdoor attacks

M Weber, X Xu, B Karlaš, C Zhang… - 2023 IEEE Symposium …, 2023 - ieeexplore.ieee.org
Recent studies have shown that deep neural networks (DNNs) are vulnerable to
adversarial attacks, including evasion and backdoor (poisoning) attacks. On the defense …

ACORN: Input validation for secure aggregation

J Bell, A Gascón, T Lepoint, B Li, S Meiklejohn… - 32nd USENIX Security …, 2023 - usenix.org
Secure aggregation enables a server to learn the sum of client-held vectors in a privacy-
preserving way, and has been applied to distributed statistical analysis and machine …
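The core idea behind secure aggregation mentioned here can be sketched with pairwise additive masking; this is a toy illustration of mask cancellation, not the ACORN protocol or its input-validation machinery.

```python
import random

def mask_inputs(vectors, seed=0):
    # Each client pair (i, j), i < j, agrees on a random mask vector:
    # client i adds it to its input and client j subtracts it, so the
    # server sees only masked vectors, yet the masks cancel in the sum.
    rng = random.Random(seed)
    n, dim = len(vectors), len(vectors[0])
    masked = [list(v) for v in vectors]
    for i in range(n):
        for j in range(i + 1, n):
            pair_mask = [rng.uniform(-1, 1) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] += pair_mask[k]  # client i adds the shared mask
                masked[j][k] -= pair_mask[k]  # client j subtracts the same mask
    return masked

def aggregate(masked):
    # Server-side: element-wise sum of the masked vectors.
    return [sum(col) for col in zip(*masked)]

client_vectors = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
total = aggregate(mask_inputs(client_vectors))
```

The server recovers the true sum of the client vectors without seeing any individual one, which is exactly the property that input-validation work such as ACORN hardens against malformed client inputs.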