AdaptSFL: Adaptive split federated learning in resource-constrained edge networks

Z Lin, G Qu, W Wei, X Chen, KK Leung - arXiv preprint arXiv:2403.13101, 2024 - arxiv.org
The increasing complexity of deep neural networks poses significant barriers to democratizing them to resource-limited edge devices. To address this challenge, split federated learning (SFL) has emerged as a promising solution by offloading the primary training workload to a server via model partitioning while enabling parallel training among edge devices. However, although system optimization substantially influences the performance of SFL under resource-constrained systems, the problem remains largely uncharted. In this paper, we provide a convergence analysis of SFL which quantifies the impact of model splitting (MS) and client-side model aggregation (MA) on the learning performance, serving as a theoretical foundation. Then, we propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems. Specifically, AdaptSFL adaptively controls client-side MA and MS to balance communication-computing latency and training convergence. Extensive simulations across various datasets validate that our proposed AdaptSFL framework takes considerably less time to achieve a target accuracy than benchmarks, demonstrating the effectiveness of the proposed strategies.
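For readers unfamiliar with the setup, the following is a minimal sketch of one split training step of the kind the abstract describes, assuming a toy PyTorch model partitioned at a cut layer. The model architecture, the cut index, and all variable names are illustrative assumptions for this sketch, not the paper's implementation or its MS/MA control algorithm.

```python
# Minimal sketch of one split-federated training step on a single device.
# The full model is partitioned at a cut layer: the client runs the early
# layers, the server runs the rest. All names and the cut index are
# illustrative, not taken from the AdaptSFL paper.
import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),   # client-side layers (before the cut)
    nn.Linear(256, 128), nn.ReLU(),       # server-side layers (after the cut)
    nn.Linear(128, 10),
)
cut = 3  # model splitting (MS) decision: layers [0, cut) run on the device

client_model = full_model[:cut]           # kept on the edge device
server_model = full_model[cut:]           # offloaded to the server

opt_c = torch.optim.SGD(client_model.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server_model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))

# Client forward pass up to the cut layer; the cut-layer activations
# ("smashed data") are what gets transmitted to the server in SFL.
smashed = client_model(x)
smashed_srv = smashed.detach().requires_grad_(True)  # server's received copy

# Server completes the forward pass and backpropagates to the cut layer.
logits = server_model(smashed_srv)
loss = loss_fn(logits, y)
opt_s.zero_grad()
loss.backward()
opt_s.step()

# Client receives the cut-layer gradient and finishes backpropagation locally.
opt_c.zero_grad()
smashed.backward(smashed_srv.grad)
opt_c.step()
```

In a multi-device deployment, client-side model aggregation (MA) would additionally average the `client_model` parameters across edge devices (FedAvg-style) at some interval; the cut-layer index and that aggregation frequency are the two knobs the abstract says AdaptSFL tunes to trade communication-computing latency against convergence speed.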