Pipelining Split Learning in Multi-hop Edge Networks
Wei Wei, Zheng Lin, Tao Li, Xuanheng Li, Xianhao Chen
Published: 2025/5/7
Abstract
To support large-scale model training, split learning (SL) enables multiple edge devices/servers to share the intensive training workload. However, most existing works on SL focus solely on two-tier model splitting. Moreover, while some recent works have investigated model splitting and placement for multi-hop SL, these solutions fail to address the resource idleness issue, resulting in significant network idle time. In this work, we propose a pipelined SL scheme that addresses the joint optimization of model splitting and placement (MSP) in multi-hop edge networks. By applying pipeline parallelism to SL, we show that the MSP problem can be mapped to minimizing the weighted sum of a bottleneck cost function (min-max) and a linear cost function (min-sum). Building on graph theory, we devise a bottleneck-aware shortest-path algorithm that obtains the optimal solution. In addition, given the MSP outcome, we derive a closed-form solution for the micro-batch size in the pipeline. Finally, we develop an alternating optimization algorithm over MSP and micro-batch size that minimizes the end-to-end training latency. Extensive simulations demonstrate the significant advantages of our algorithm over existing benchmarks without pipeline parallelism.
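To illustrate the graph mapping described above, consider minimizing a weighted sum of a bottleneck (min-max) term and a linear (min-sum) term along a path. A standard way to solve such problems is to sweep over candidate bottleneck thresholds and, for each threshold, run a sum-cost shortest-path search restricted to feasible edges. The sketch below is a minimal illustration of this idea, not the paper's implementation: the adjacency-dict representation, the function names, and the weights `w_max`/`w_sum` are all assumptions made for exposition.

```python
import heapq

def dijkstra_sum(adj, src, dst, cap):
    """Sum-cost shortest path from src to dst, using only edges whose
    bottleneck cost is at most `cap`. Returns (sum_cost, path_max) or None."""
    # dist maps node -> (best linear-cost sum, bottleneck along that path)
    dist = {src: (0.0, 0.0)}
    pq = [(0.0, 0.0, src)]
    while pq:
        s, b, u = heapq.heappop(pq)
        if u == dst:
            return s, b  # first pop of dst is sum-optimal
        if (s, b) != dist.get(u, (None, None)):
            continue  # stale queue entry
        for v, (bot, lin) in adj.get(u, []):
            if bot > cap:
                continue  # edge infeasible under this threshold
            cand = (s + lin, max(b, bot))
            if v not in dist or cand[0] < dist[v][0]:
                dist[v] = cand
                heapq.heappush(pq, (cand[0], cand[1], v))
    return None

def min_weighted_bottleneck_sum_path(adj, src, dst, w_max, w_sum):
    """Minimize w_max * max(bottleneck costs) + w_sum * sum(linear costs)
    over src->dst paths by sweeping every distinct bottleneck value."""
    caps = sorted({bot for edges in adj.values() for _, (bot, _) in edges})
    best = None
    for cap in caps:
        res = dijkstra_sum(adj, src, dst, cap)
        if res is None:
            continue
        s, b = res
        obj = w_max * b + w_sum * s  # evaluate with the path's actual max
        if best is None or obj < best[0]:
            best = (obj, cap)
    return best

# Toy example (hypothetical costs): edge = (next_node, (bottleneck, linear)).
adj = {
    0: [(1, (5.0, 1.0)), (2, (1.0, 4.0))],
    1: [(3, (2.0, 1.0))],
    2: [(3, (1.0, 4.0))],
}
print(min_weighted_bottleneck_sum_path(adj, 0, 3, w_max=1.0, w_sum=1.0))
```

The sweep is correct because the optimal path is recovered at the threshold equal to its own bottleneck value, where the restricted sum-shortest path can only match or improve its objective; the paper's bottleneck-aware algorithm may organize this search differently.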