Projection-based model-order reduction via graph autoencoders suited for unstructured meshes
Liam K. Magargal, Parisa Khodabakhshi, Steven N. Rodriguez, Justin W. Jaworski, John G. Michopoulos
Published: 2024/7/18
Abstract
This paper presents the development of a graph autoencoder architecture capable of performing projection-based model-order reduction (PMOR) using a nonlinear manifold least-squares Petrov-Galerkin (LSPG) projection scheme. The architecture is particularly useful for advection-dominated flows modeled on unstructured meshes, as it provides a robust nonlinear mapping that can be leveraged in a PMOR setting. The graph autoencoder is constructed in a two-part process that consists of (1) generating a hierarchy of reduced graphs to emulate the compressive abilities of convolutional neural networks (CNNs) and (2) training a message-passing operation at each step in the hierarchy of reduced graphs to emulate the filtering process of a CNN. The resulting framework is more flexible than traditional CNN-based autoencoders because it extends readily to unstructured meshes. We analyze the interpretability of the graph autoencoder's latent state variables and find that the Jacobian of the decoder provides interpretable mode shapes akin to traditional proper orthogonal decomposition (POD) modes. To highlight the capabilities of the proposed framework, named geometric deep least-squares Petrov-Galerkin (GD-LSPG), we benchmark the method on a one-dimensional Burgers' model with a structured mesh and demonstrate its flexibility by deploying it on two test cases for the two-dimensional Euler equations that use unstructured meshes. Compared with traditional affine projections, GD-LSPG provides a considerable improvement in accuracy for very low-dimensional latent spaces.
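To make the two-part construction concrete, the sketch below (in PyTorch) is a hypothetical illustration, not the authors' implementation: it encodes a state on a mesh graph by alternating trained message-passing layers with pooling onto a precomputed hierarchy of reduced graphs, and mirrors the process in the decoder. The adjacency matrices `adjacency`, assignment matrices `assignments`, and the feature width are assumed inputs produced by a separate graph-coarsening step.

```python
import torch
import torch.nn as nn

class HierarchicalGraphAE(nn.Module):
    """Sketch: message passing + pooling over a hierarchy of reduced graphs."""

    def __init__(self, adjacency, assignments, width, latent_dim):
        super().__init__()
        # adjacency[l]: (n_l, n_l) adjacency of the level-l graph (level 0 = full mesh)
        # assignments[l]: (n_l, n_{l+1}) 0/1 map from level-l nodes to level-(l+1) nodes
        self.A = [self._normalize(a) for a in adjacency]
        self.S = assignments
        levels = len(assignments)
        self.enc = nn.ModuleList(nn.Linear(width, width) for _ in range(levels))
        self.dec = nn.ModuleList(nn.Linear(width, width) for _ in range(levels))
        n_coarsest = assignments[-1].shape[1]
        self.to_latent = nn.Linear(n_coarsest * width, latent_dim)
        self.from_latent = nn.Linear(latent_dim, n_coarsest * width)

    @staticmethod
    def _normalize(A):
        A = A + torch.eye(A.shape[0])          # add self-loops
        return A / A.sum(dim=1, keepdim=True)  # row-normalize: mean aggregation

    def encode(self, X):
        # X: (n_0, width) node features on the full mesh
        for l, layer in enumerate(self.enc):
            X = torch.relu(layer(self.A[l] @ X))  # message passing on level l
            X = self.S[l].T @ X                   # pool onto the next reduced graph
        return self.to_latent(X.reshape(-1))      # flatten coarsest graph -> latent

    def decode(self, z):
        X = self.from_latent(z).reshape(self.S[-1].shape[1], -1)
        for l in reversed(range(len(self.dec))):
            X = self.S[l] @ X                     # unpool back to the finer graph
            X = self.dec[l](self.A[l] @ X)        # message passing on level l
            if l > 0:
                X = torch.relu(X)                 # keep the finest-level output linear
        return X

    def forward(self, X):
        return self.decode(self.encode(X))
```

Mean-aggregation message passing is used here purely for brevity; the point of the sketch is that the pooling maps `S` play the role that strided filters play in a CNN, which is what lets the architecture operate on unstructured meshes.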
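The nonlinear-manifold LSPG projection named in the abstract can likewise be summarized as a Gauss-Newton iteration that minimizes the fully discrete residual restricted to the decoder manifold, x ≈ x_ref + g(z). The sketch below illustrates that generic scheme (in the spirit of nonlinear-manifold LSPG, not GD-LSPG specifically); `residual`, `decoder`, and `x_ref` are assumed user-supplied. The columns of the decoder Jacobian computed here are the quantities whose interpretability the abstract compares with POD modes.

```python
import torch

def manifold_lspg_step(residual, decoder, z0, x_ref, iters=10, tol=1e-8):
    """Gauss-Newton solve of min_z || residual(x_ref + decoder(z)) ||_2.

    residual: callable mapping a full state x to the discrete residual r(x)
    decoder:  trained nonlinear map g from latent z to a state increment
    """
    z = z0
    for _ in range(iters):
        r = residual(x_ref + decoder(z))
        # dr/dz by the chain rule through the decoder; its columns act as a
        # local, state-dependent basis, analogous to POD modes in linear LSPG.
        J = torch.autograd.functional.jacobian(
            lambda w: residual(x_ref + decoder(w)), z)
        dz = torch.linalg.lstsq(J, -r.unsqueeze(-1)).solution.squeeze(-1)
        z = z + dz
        if dz.norm() < tol:
            break
    return z
```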