Key Considerations for Auto-Scaling: Lessons from Benchmark Microservices
Majid Dashtbani, Ladan Tahvildari
Published: 2025/10/2
Abstract
Microservices have become the dominant architectural paradigm for building scalable and modular cloud-native systems. However, achieving effective auto-scaling in such systems remains a non-trivial challenge, as it depends not only on advanced scaling techniques but also on sound design, implementation, and deployment practices. Yet, these foundational aspects are often overlooked in existing benchmarks, making it difficult to evaluate auto-scaling methods under realistic conditions. In this paper, we identify a set of practical auto-scaling considerations by applying several state-of-the-art auto-scaling methods to widely used microservice benchmarks. To structure these findings, we classify the issues based on when they arise during the software lifecycle: Architecture, Implementation, and Deployment. The Architecture phase covers high-level decisions such as service decomposition and inter-service dependencies. The Implementation phase includes aspects like initialization overhead, metrics instrumentation, and error propagation. The Deployment phase focuses on runtime configurations such as resource limits and health checks. We validate these considerations using the Sock-Shop benchmark and evaluate diverse auto-scaling strategies, including threshold-based, control-theoretic, learning-based, black-box optimization, and dependency-aware approaches. Our findings show that overlooking key lifecycle concerns can degrade autoscaler performance, while addressing them leads to more stable and efficient scaling. These results underscore the importance of lifecycle-aware engineering for unlocking the full potential of auto-scaling in microservice-based systems.
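For orientation, the sketch below illustrates the simplest of the strategy classes named above: a threshold-based autoscaler that adjusts a service's replica count from observed CPU utilization. It is a minimal, hypothetical example, not the paper's implementation; the thresholds, interval, and the get_cpu_utilization/get_replicas/set_replicas helpers are assumed placeholders for whatever metrics and orchestration APIs a given deployment exposes.

    # Illustrative threshold-based autoscaling loop (hypothetical helpers; not the paper's code).
    import time

    SCALE_UP_THRESHOLD = 0.70    # scale out above 70% average CPU utilization
    SCALE_DOWN_THRESHOLD = 0.30  # scale in below 30% average CPU utilization
    MIN_REPLICAS, MAX_REPLICAS = 1, 10

    def autoscale(service, get_cpu_utilization, get_replicas, set_replicas, interval=30):
        """Poll utilization and adjust the replica count one step at a time."""
        while True:
            utilization = get_cpu_utilization(service)  # e.g., averaged over the last interval
            replicas = get_replicas(service)
            if utilization > SCALE_UP_THRESHOLD and replicas < MAX_REPLICAS:
                set_replicas(service, replicas + 1)      # add one replica
            elif utilization < SCALE_DOWN_THRESHOLD and replicas > MIN_REPLICAS:
                set_replicas(service, replicas - 1)      # remove one replica
            time.sleep(interval)

Even this simple rule is sensitive to the lifecycle concerns the paper identifies: for example, slow service initialization or missing resource limits can make the measured utilization a misleading scaling signal.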