Fulcrum: Optimizing Concurrent DNN Training and Inferencing on Edge Accelerators

Prashanthi S. K., Saisamarth Taluri, Pranav Gupta, Amartya Ranjan Saikia, Kunal Kumar Sahoo, Atharva Vinay Joshi, Lakshya Karwa, Kedar Dhule, Yogesh Simmhan

Published: 2025/9/24

Abstract

The proliferation of GPU-accelerated edge devices like Nvidia Jetsons and the rise in privacy concerns are placing an emphasis on concurrent DNN training and inferencing on edge devices. Inference and training have different compute and QoS goals. But edge accelerators like the Jetson do not support native GPU sharing and expose thousands of power modes. This requires careful time-sharing of concurrent workloads to meet power--performance goals, while limiting costly profiling. In this paper, we design an intelligent time-slicing approach for concurrent DNN training and inferencing on Jetsons. We formulate an optimization problem that interleaves training and inference minibatches and selects the device power mode and inference minibatch size, maximizing training throughput while staying within latency and power budgets, with modest profiling costs. We propose GMD, an efficient multi-dimensional gradient descent search that profiles just $15$ power modes; and ALS, an Active Learning technique that identifies reusable Pareto-optimal power modes, at the cost of profiling $50$--$150$ power modes. We evaluate these within our Fulcrum scheduler for $273,000+$ configurations across $15$ DNN workloads. We also evaluate our strategies on inference with dynamic arrivals and on concurrent inferences. ALS and GMD outperform both simpler baselines and more complex baselines that rely on larger-scale profiling. Their solutions satisfy the latency and power budgets for $>97\%$ of our runs, and on average are within $7\%$ of the optimal throughput.
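To make the GMD idea concrete, below is a minimal, hypothetical sketch of a multi-dimensional greedy descent over discretized power-mode knobs (CPU cores, CPU/GPU/memory frequencies), profiling only a handful of modes. The knob grids, budgets, and the `profile()` stub are illustrative stand-ins, not the paper's actual profiler, search schedule, or Jetson power-mode values.

```python
# Sketch of a GMD-style multi-dimensional descent over Jetson power-mode knobs.
# All knob values, budgets, and the profile() model below are hypothetical
# placeholders so the example runs standalone.

# Hypothetical discretized knobs of a power mode.
KNOBS = {
    "cpu_cores": [2, 4, 6, 8],
    "cpu_mhz":   [730, 1190, 1650, 2200],
    "gpu_mhz":   [420, 620, 830, 1300],
    "mem_mhz":   [1600, 2133, 3200],
}

LATENCY_BUDGET_MS = 50.0   # hypothetical inference latency budget
POWER_BUDGET_W    = 30.0   # hypothetical device power budget

def profile(mode):
    """Stand-in for one on-device profiling run of a power mode.

    Returns (training throughput in minibatches/s, inference latency in ms,
    power draw in W). A synthetic linear model keeps the sketch runnable.
    """
    c, f_cpu, f_gpu, f_mem = (mode[k] for k in KNOBS)
    speed = (c / 8) * 0.2 + (f_cpu / 2200) * 0.2 \
          + (f_gpu / 1300) * 0.45 + (f_mem / 3200) * 0.15
    return 10.0 * speed, 20.0 / speed, 35.0 * speed

def feasible(lat_ms, power_w):
    return lat_ms <= LATENCY_BUDGET_MS and power_w <= POWER_BUDGET_W

def gmd_search(start):
    """Greedy descent: repeatedly step one knob up or down toward the best
    feasible neighbor, stopping when no neighbor improves throughput."""
    mode, profiled = dict(start), 0
    tput, lat, pwr = profile(mode); profiled += 1
    best = tput if feasible(lat, pwr) else -1.0
    improved = True
    while improved:
        improved = False
        for knob, levels in KNOBS.items():
            i = levels.index(mode[knob])
            for j in (i - 1, i + 1):           # one step down / up this dimension
                if 0 <= j < len(levels):
                    cand = dict(mode, **{knob: levels[j]})
                    t, l, p = profile(cand); profiled += 1
                    if feasible(l, p) and t > best:
                        mode, best, improved = cand, t, True
    return mode, best, profiled

if __name__ == "__main__":
    start = {k: v[len(v) // 2] for k, v in KNOBS.items()}  # start mid-grid
    mode, tput, n = gmd_search(start)
    print(f"profiled {n} modes -> {mode}, throughput ~{tput:.2f} mb/s")
```

In this toy setting the descent converges after profiling on the order of a dozen modes, which mirrors the paper's motivation for GMD: reaching a near-optimal, budget-feasible power mode without exhaustively profiling the full power-mode space.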
