HADIS: Hybrid Adaptive Diffusion Model Serving for Efficient Text-to-Image Generation

Qizheng Yang, Tung-I Chen, Siyu Zhao, Ramesh K. Sitaraman, Hui Guan

Published: 2025-08-31

Abstract

Text-to-image diffusion models have achieved remarkable visual quality but incur high computational costs, making real-time, scalable deployment challenging. Existing query-aware serving systems mitigate the cost by cascading lightweight and heavyweight models, but most rely on a fixed cascade configuration and route all prompts through an initial lightweight stage, wasting resources on complex queries. We present HADIS, a hybrid adaptive diffusion model serving system that jointly optimizes cascade model selection, query routing, and resource allocation. HADIS employs a rule-based prompt router to send clearly hard queries directly to heavyweight models, bypassing the overhead of the lightweight stage. To reduce the complexity of resource management, HADIS uses an offline profiling phase to produce a Pareto-optimal cascade configuration table. At runtime, HADIS selects the best cascade configuration and GPU allocation given latency and workload constraints. Empirical evaluations on real-world traces demonstrate that HADIS improves response quality by up to 35% while reducing latency violation rates by 2.7-45$\times$ compared to state-of-the-art model serving systems.
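The abstract's control loop can be illustrated with a minimal sketch: a rule-based router that forwards clearly hard prompts straight to the heavyweight model, and a runtime selector that picks the highest-quality entry from an offline-profiled Pareto table subject to a latency constraint. All names, keywords, and profiled numbers below are hypothetical placeholders, not values from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CascadeConfig:
    light_model: str
    heavy_model: str
    escalation_threshold: float  # quality score below which a query escalates
    quality: float               # profiled mean response quality (offline)
    latency: float               # profiled latency (s) at a reference load

# Hypothetical offline-profiled Pareto frontier (quality vs. latency).
PARETO_TABLE = [
    CascadeConfig("light-a", "heavy-a", 0.60, 0.71, 1.2),
    CascadeConfig("light-a", "heavy-a", 0.75, 0.79, 2.1),
    CascadeConfig("light-b", "heavy-a", 0.80, 0.86, 3.4),
]

def select_config(latency_slo, table=PARETO_TABLE):
    """Pick the highest-quality Pareto config whose profiled latency
    meets the SLO; fall back to the fastest config otherwise."""
    feasible = [c for c in table if c.latency <= latency_slo]
    if feasible:
        return max(feasible, key=lambda c: c.quality)
    return min(table, key=lambda c: c.latency)

# Illustrative hardness cues for the rule-based router (assumed, not from the paper).
HARD_KEYWORDS = {"photorealistic", "intricate", "highly", "detailed"}

def route(prompt):
    """Rule-based router: send clearly hard prompts directly to the
    heavyweight model, bypassing the lightweight stage."""
    words = set(prompt.lower().split())
    return "heavy" if words & HARD_KEYWORDS else "light-first"
```

For example, with a 2.5 s latency budget the selector returns the 2.1 s / 0.79-quality configuration, while a prompt containing a hardness cue such as "photorealistic" is routed past the lightweight stage entirely.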
