Presolving Convexified Optimal Power Flow with Mixtures of Gradient Experts

Shourya Bose, Kejun Chen, Yu Zhang

Published: 2023/12/12

Abstract

Convex relaxations and approximations of the optimal power flow (OPF) problem have gained significant research and industrial interest for planning and operations in electric power networks. One approach for reducing their solve times is presolving, which eliminates constraints from the problem definition, thereby reducing the burden on the underlying optimization algorithm. To this end, we propose a presolving framework for convexified optimal power flow (C-OPF) problems, which uses a novel deep learning-based architecture called MoGE (Mixture of Gradient Experts). In this framework, problem size is reduced by learning the mapping between C-OPF parameters and optimal dual variables (the latter being representable as gradients), which is then used to screen constraints that are non-binding at the optimum. The validity of using this presolve framework across arbitrary families of C-OPF problems is theoretically demonstrated. We characterize generalization in MoGE and develop a post-solve recovery procedure to mitigate possible constraint classification errors. Using two different C-OPF models, we show via simulations that our framework reduces solve times by up to 34% across multiple PGLIB and MATPOWER test cases, while providing the same solution as the full problem.
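The screening idea behind the abstract can be illustrated on a toy problem: by complementary slackness, an inequality constraint with a (near-)zero optimal dual is non-binding and can be dropped before solving, and any screened constraint that the reduced solution violates is re-added afterward. The sketch below is a hypothetical illustration on a small LP, not the paper's method: the duals come from a prior full solve rather than a learned MoGE model, and the threshold and recovery logic are assumptions.

```python
# Hypothetical sketch of dual-based constraint screening (a presolve step),
# shown on a toy LP rather than a C-OPF instance. In the paper's framework,
# the dual variables would be predicted by the MoGE model instead of being
# taken from a prior full solve.
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize c @ x  s.t.  A_ub @ x <= b_ub,  x >= 0
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 0.0],
                 [0.0, 1.0]])
b_ub = np.array([4.0, 3.0, 3.0])
bounds = [(0, None)] * 2

# Full solve; its duals stand in for the learned dual prediction.
full = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
duals = -full.ineqlin.marginals  # nonnegative multipliers of <= constraints

# Screening: constraints with near-zero duals are flagged non-binding
# (threshold 1e-8 is an assumed tolerance) and dropped.
keep = duals > 1e-8
reduced = linprog(c, A_ub=A_ub[keep], b_ub=b_ub[keep],
                  bounds=bounds, method="highs")

# Post-solve recovery: if the reduced solution violates any screened
# constraint (a classification error), re-add it and re-solve.
violated = A_ub[~keep] @ reduced.x > b_ub[~keep] + 1e-8
if violated.any():
    keep[np.flatnonzero(~keep)[violated]] = True
    reduced = linprog(c, A_ub=A_ub[keep], b_ub=b_ub[keep],
                      bounds=bounds, method="highs")

print(np.allclose(full.x, reduced.x))
```

Here the second constraint (x1 <= 3) is slack at the optimum, so it receives a zero dual and is screened out, and the reduced problem returns the same solution as the full one.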
