Solved in Unit Domain: JacobiNet for Differentiable Coordinate-Transformed PINNs
Xi Chen, Jianchuan Yang, Junjie Zhang, Runnan Yang, Xu Liu, Hong Wang, Tinghui Zheng, Ziyu Ren, Wenqi Hu
Published: 2025/8/4
Abstract
Physics-Informed Neural Networks (PINNs) offer a powerful framework for solving PDEs by embedding physical laws into the learning process. However, when applied to domains with irregular boundaries, PINNs often suffer from instability and slow convergence, which stem from (1) inconsistent normalization due to geometric anisotropy, (2) inaccurate boundary enforcement, and (3) imbalanced competition among loss terms. A common workaround is to map the domain to a regular space. Yet conventional mapping methods rely on case-specific meshes, define Jacobians only at pre-specified fixed nodes, and reformulate PDEs via the chain rule, making them incompatible with modern automatic-differentiation, tensor-based frameworks. To bridge this gap, we propose JacobiNet, a learning-based coordinate-transformed PINN framework that unifies domain mapping and PDE solving within an end-to-end differentiable architecture. Leveraging lightweight MLPs, JacobiNet learns continuous, differentiable mappings, enables direct Jacobian computation via autograd, and shares its computation graph with the downstream PINN. Its continuous nature and built-in Jacobian eliminate the need for meshing, explicit Jacobian computation and storage, and PDE reformulation, while unlocking geometric-editing operations and reducing mapping cost. By separating physical modeling from geometric complexity, JacobiNet (1) addresses normalization challenges in the original anisotropic coordinates, (2) facilitates hard enforcement of boundary conditions, and (3) mitigates the long-standing imbalance among loss terms. Evaluated on a range of PDEs, JacobiNet reduces the L2 error from 0.11-0.73 to 0.01-0.09. In vessel-like domains with varying shapes, JacobiNet enables millisecond-level mapping inference for unseen geometries, improves prediction accuracy by an average factor of 3.65x, and delivers over a 10x speedup, demonstrating strong generalization, accuracy, and efficiency.
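To make the core mechanism concrete, the sketch below illustrates the kind of construction the abstract describes: a lightweight MLP maps physical coordinates on an irregular domain to reference coordinates on the unit square, and because the mapping and the solution network share one autograd graph, physical-space derivatives and the mapping Jacobian both come directly from automatic differentiation. This is a minimal illustration under assumptions, not the authors' released code; PyTorch and the names MappingNet and pinn are placeholders chosen for this example.

import torch
import torch.nn as nn

class MappingNet(nn.Module):
    # Lightweight MLP: physical (x, y) on an irregular domain -> (xi, eta) on the unit square.
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 2),
        )
    def forward(self, xy):
        return self.net(xy)

mapping = MappingNet()
pinn = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))  # stand-in solution network

xy = torch.rand(128, 2, requires_grad=True)   # collocation points in physical coordinates
uv = mapping(xy)                              # reference coordinates in the unit square
u = pinn(uv)                                  # solution evaluated through the composed graph

# Mapping and PINN share one computation graph, so derivatives with respect to the
# physical coordinates come from a single autograd call; the chain rule is applied
# automatically, with no hand-derived PDE reformulation.
du_dxy = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]   # shape (128, 2)

# If the mapping Jacobian d(xi, eta)/d(x, y) is needed explicitly (e.g. for metric
# terms), it is also available pointwise from autograd rather than stored at mesh nodes.
jac = torch.vmap(torch.func.jacrev(mapping))(xy.detach())         # shape (128, 2, 2)

In this arrangement the geometric complexity lives entirely in the mapping network, so the solver network trains on well-normalized unit-square coordinates, which is the separation of physical modeling from geometry that the abstract attributes to JacobiNet.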