Speeding Up the NSGA-II via Dynamic Population Sizes
Benjamin Doerr, Martin S. Krejca, Simon Wietheger
Published: 2025/9/1
Abstract
Multi-objective evolutionary algorithms (MOEAs) are among the most widely and successfully applied optimizers for multi-objective problems. However, to store many optimal trade-offs (the Pareto optima) at once, MOEAs are typically run with a large, static population of solution candidates, which can slow down the algorithm. We propose the dynamic NSGA-II (dNSGA-II), which is based on the popular NSGA-II and features a non-static population size. The dNSGA-II starts with a small initial population size of four and doubles it after a user-specified number $\tau$ of function evaluations, up to a maximum size of $\mu$. Via a mathematical runtime analysis, we prove that the dNSGA-II with parameters $\mu \geq 4(n + 1)$ and $\tau \geq \frac{256}{50} e n$ computes the full Pareto front of the \textsc{OneMinMax} benchmark of size $n$ in $O(\log(\mu) \tau + \mu \log(n))$ function evaluations, both in expectation and with high probability. For an optimal choice of $\mu$ and $\tau$, the resulting $O(n \log(n))$ runtime improves the optimal expected runtime of the classic NSGA-II by a factor of $\Theta(n)$. In addition, we show that the parameter $\tau$ can be removed when utilizing concurrent runs of the dNSGA-II. This approach leads to a mild slow-down by a factor of $O(\log(n))$ compared to an optimal choice of $\tau$ for the dNSGA-II, which is still a speed-up of $\Theta(n / \log(n))$ over the classic NSGA-II.
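
The population-size schedule described above (start at size four, double after every $\tau$ function evaluations, cap at $\mu$) can be illustrated with a short sketch. The Python below is a minimal, illustrative implementation on \textsc{OneMinMax}, not the authors' algorithm: the names (`dnsga2`, `one_min_max`), the choice of one offspring per parent via standard bit-wise mutation with rate $1/n$, and the way the population is refilled after a doubling (simply keeping the combined parent-and-offspring pool) are assumptions; only the doubling schedule itself and the usual NSGA-II survival selection (non-dominated sorting plus crowding distance) follow the description above.

```python
import random

def one_min_max(x):
    """OneMinMax: maximize both the number of zeros and the number of ones."""
    ones = sum(x)
    return (len(x) - ones, ones)

def dominates(a, b):
    """Pareto dominance for maximization."""
    return all(ai >= bi for ai, bi in zip(a, b)) and any(ai > bi for ai, bi in zip(a, b))

def non_dominated_sort(fits):
    """Fast non-dominated sorting; returns fronts as lists of indices."""
    n = len(fits)
    dominated_by = [[] for _ in range(n)]
    dom_count = [0] * n
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(fits[i], fits[j]):
                dominated_by[i].append(j)
            elif dominates(fits[j], fits[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

def crowding_distance(front, fits):
    """Crowding distance of each index in a single front."""
    dist = {i: 0.0 for i in front}
    for obj in range(len(fits[0])):
        order = sorted(front, key=lambda i: fits[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = fits[order[-1]][obj] - fits[order[0]][obj] or 1
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (fits[c][obj] - fits[a][obj]) / span
    return dist

def dnsga2(n, mu, tau, max_evals=200_000, seed=0):
    rng = random.Random(seed)
    size = 4                      # dNSGA-II starts with population size four
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(size)]
    evals = 0
    next_doubling = tau
    while evals < max_evals:
        # One offspring per parent via standard bit-wise mutation (rate 1/n).
        offspring = []
        for parent in pop:
            offspring.append([1 - b if rng.random() < 1 / n else b for b in parent])
            evals += 1
        # Double the population size after every tau evaluations, capped at mu.
        if evals >= next_doubling and size < mu:
            size = min(2 * size, mu)
            next_doubling += tau
        # NSGA-II survival selection on the combined parent+offspring pool.
        combined = pop + offspring
        fits = [one_min_max(x) for x in combined]
        survivors = []
        for front in non_dominated_sort(fits):
            if len(survivors) + len(front) <= size:
                survivors.extend(front)
            else:
                dist = crowding_distance(front, fits)
                front.sort(key=lambda i: dist[i], reverse=True)
                survivors.extend(front[: size - len(survivors)])
                break
        pop = [combined[i] for i in survivors]
        # Stop once every Pareto-optimal objective value (0..n ones) is covered.
        if {fits[i][1] for i in survivors} == set(range(n + 1)):
            return evals
    return evals
```

As a usage example under the parameter regime stated above ($\mu \geq 4(n+1)$ and $\tau \geq \frac{256}{50} e n \approx 13.9\, n$), a call such as `dnsga2(n=20, mu=4 * 21, tau=14 * 20)` returns the number of function evaluations this sketch needs to cover the full Pareto front of \textsc{OneMinMax}.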