Authors
Ling-Yuan Chen, Te-Chuan Chiu, Ai-Chun Pang, Li-Chen Cheng
Publication date
2021/12/7
Conference paper
2021 IEEE Global Communications Conference (GLOBECOM)
Pages
1-6
Publisher
IEEE
Abstract
With the rise of edge AI, federated learning (FL) has emerged as a privacy-preserving framework that helps meet the General Data Protection Regulation (GDPR). Unfortunately, FL is vulnerable to a recent security threat: model poisoning attacks. By replacing the global model with a targeted poisoned model, malicious end devices can trigger backdoor attacks and manipulate the entire learning process. Traditional research under homogeneous environments can safely exclude outliers with little side effect on model performance. In privacy-preserving FL, however, each end device may own only a few data classes and differing amounts of data, forming a substantially heterogeneous environment in which outliers could be either malicious or benign. To preserve both the performance and the robustness of the FL framework, we should not assertively remove any local model from the global model updating …
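The model-replacement attack mentioned in the abstract can be illustrated with a minimal sketch: under plain FedAvg with equally weighted clients, an adversary that knows (or estimates) the benign updates can scale its submission so the aggregate becomes its poisoned model. This is a generic toy illustration with made-up values, not the paper's defense or its exact threat model.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of client model vectors (FedAvg aggregation)."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy 1-D "models": a global model and three benign client updates.
rng = np.random.default_rng(0)
global_model = np.zeros(4)
benign = [global_model + rng.normal(0.0, 0.1, 4) for _ in range(3)]

# Model-replacement attack: with n equally weighted clients, the
# adversary submits n * poisoned - sum(benign updates), so the
# weighted average collapses exactly to the poisoned model.
poisoned = np.array([5.0, -5.0, 5.0, -5.0])
n = 4
malicious = n * poisoned - sum(benign)

aggregate = fedavg(benign + [malicious], [1, 1, 1, 1])
# aggregate now equals the poisoned model (up to floating-point error)
```

The sketch also shows why naive outlier removal is tempting (the malicious vector is far from the benign cluster) yet risky in heterogeneous FL, where benign non-IID clients can look like outliers too.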
Total citations
Scholar articles
LY Chen, TC Chiu, AC Pang, LC Cheng - 2021 IEEE Global Communications Conference …, 2021