Authors
Cihat Keçeci, Mohammad Shaqfeh, Hayat Mbayed, Erchin Serpedin
Publication date
2022/7/17
Journal
arXiv preprint arXiv:2207.08147
Description
Federated learning enables many applications that benefit from the distributed, private datasets of a large number of potential data-holding clients. However, different clients usually have their own objectives regarding the tasks to be learned from the data. Supporting federated learning with meta-learning tools such as multi-task learning and transfer learning therefore enlarges its set of potential applications by letting clients with different but related tasks share task-agnostic models, which each client can then further update and tailor to its particular task. In a federated multi-task learning problem, the trained deep neural network model should be fine-tuned for the respective objective of each client while sharing some parameters for greater generalizability. We propose to train a deep neural network model whose layers are more generalized closer to the input and more personalized closer to the output. We achieve this by introducing layer types such as pre-trained, common, task-specific, and personal layers. We provide simulation results highlighting particular scenarios in which meta-learning-based federated learning proves useful.
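The abstract's layer taxonomy (pre-trained, common, task-specific, personal) can be illustrated with a minimal aggregation sketch. This is not the authors' code: the layer names, client structure, and `aggregate` function below are hypothetical, and the averaging rule is plain federated averaging restricted by layer role, assumed here to mean: pre-trained and personal layers are never aggregated, common layers are averaged across all clients, and task-specific layers are averaged only within each task group.

```python
import numpy as np

# Hypothetical layer roles following the paper's taxonomy (names are illustrative):
#   "pretrained"    - frozen, never updated or aggregated
#   "common"        - shared by all clients, averaged globally by the server
#   "task_specific" - averaged only among clients working on the same task
#   "personal"      - kept local to each client, never aggregated
LAYER_ROLES = {
    "l0": "pretrained",
    "l1": "common",
    "l2": "task_specific",
    "l3": "personal",
}

def aggregate(clients):
    """One server round: average 'common' layers over all clients and
    'task_specific' layers within each task group; leave the rest untouched."""
    new = {cid: dict(m["weights"]) for cid, m in clients.items()}
    for name, role in LAYER_ROLES.items():
        if role == "common":
            avg = np.mean([m["weights"][name] for m in clients.values()], axis=0)
            for cid in new:
                new[cid][name] = avg
        elif role == "task_specific":
            groups = {}
            for m in clients.values():
                groups.setdefault(m["task"], []).append(m["weights"][name])
            for cid, m in clients.items():
                new[cid][name] = np.mean(groups[m["task"]], axis=0)
    return new

# Toy clients: A and B share task 0, C has task 1; weights are constant vectors.
clients = {
    "A": {"task": 0, "weights": {n: np.full(2, v) for n, v in
          zip(LAYER_ROLES, [1.0, 2.0, 4.0, 8.0])}},
    "B": {"task": 0, "weights": {n: np.full(2, v) for n, v in
          zip(LAYER_ROLES, [1.0, 4.0, 6.0, 9.0])}},
    "C": {"task": 1, "weights": {n: np.full(2, v) for n, v in
          zip(LAYER_ROLES, [1.0, 6.0, 0.0, 7.0])}},
}

agg = aggregate(clients)
# After the round: the common layer l1 is the global mean (4.0) for everyone,
# the task-specific layer l2 is averaged per task group (5.0 for A and B),
# and the personal layer l3 is unchanged for each client.
```

This separation captures the generalized-near-input, personalized-near-output structure: the layers closest to the input are shared most widely, while the output-side layers stay client-specific.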
Total citations
Scholar articles
C Keçeci, M Shaqfeh, H Mbayed, E Serpedin - arXiv preprint arXiv:2207.08147, 2022