Neural network-based singularity detection and applications

Nadiia Derevianko, Ioannis G. Kevrekidis, Felix Dietrich

Published: 2025/9/12

Abstract

We present a method for constructing a special type of shallow neural network that learns univariate meromorphic functions with pole-type singularities. Our method uses a finite set of Laurent coefficients as input information, which we compute by FFT from values of the investigated function on a contour $\Gamma$ in the complex plane. The primary components of our methodology are the following: (1) the adaptive construction of rational polynomial activation functions, (2) a novel backpropagation-free method for determining the weights and biases of the hidden layer, and (3) the computation of the weights and biases of the output layer through least-squares fitting. Breaking with the idea of "safe" rational activation functions, we introduce a rational activation function as a meromorphic function with a single pole situated within the domain of investigation. Using the weights and biases of the hidden layer, we then scale and shift the pole of the activation function to obtain the estimated locations of the singularities; this implies that the number of neurons in the hidden layer is determined by the number of singularities of the function being approximated. While the weights and biases of the hidden layer are tuned to capture the singularities, the least-squares fit for the weights and biases of the output layer ensures approximation of the function on the rest of the domain. Using Laurent–Padé rational approximation concepts, we prove locally uniform convergence of our method. We illustrate the effectiveness of our method through numerical experiments, including the construction of extensions of the time-dependent solutions of nonlinear autonomous PDEs into the complex plane, and we study the dynamics of their singularities.
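The sketch below illustrates the overall pipeline described in the abstract under strong simplifying assumptions; it is not the authors' implementation. It assumes the contour $\Gamma$ is the unit circle, the target function has a single simple pole inside it, and the backpropagation-free determination of the hidden-layer parameters is replaced by a simple ratio heuristic on the FFT-computed Laurent coefficients. The output layer (one rational "neuron" with its pole shifted to the estimate, plus a polynomial part for the smooth remainder) is then fit by least squares on samples along $\Gamma$.

```python
import numpy as np

def laurent_coefficients(f, n_coeffs=64):
    """Approximate Laurent coefficients of f on the unit circle via the FFT (trapezoidal rule)."""
    nodes = np.exp(2j * np.pi * np.arange(n_coeffs) / n_coeffs)  # equispaced points on Gamma
    samples = f(nodes)
    return nodes, samples, np.fft.fft(samples) / n_coeffs        # coeffs[j] ~ c_j (plus aliasing)

def estimate_pole(coeffs):
    """Heuristic single-pole estimate (an assumption, not the paper's rule): for a simple pole
    z0 inside the unit circle, c_{-n} = residue * z0**(n-1), and the FFT aliases c_{-n} to
    index N - n, so the ratio of the last two coefficients approximates z0."""
    return coeffs[-2] / coeffs[-1]

def design_matrix(z, z0, degree=7):
    """One 'rational neuron' 1/(z - z0) plus a polynomial part for the smooth remainder."""
    return np.column_stack([1.0 / (z - z0), np.vander(z, degree + 1, increasing=True)])

def fit_network(f, n_coeffs=64):
    """Hidden layer: shift the pole of sigma(z) = 1/z to the estimated singularity
    (no backpropagation). Output layer: least-squares fit on the contour Gamma."""
    nodes, samples, coeffs = laurent_coefficients(f, n_coeffs)
    z0 = estimate_pole(coeffs)
    weights, *_ = np.linalg.lstsq(design_matrix(nodes, z0), samples, rcond=None)
    return z0, lambda z: design_matrix(z, z0) @ weights

# Example: a simple pole at 0.5 + 0.3j plus an entire part.
f = lambda z: 1.0 / (z - (0.5 + 0.3j)) + np.exp(z)
z0, model = fit_network(f)
test = 0.9 * np.exp(2j * np.pi * np.linspace(0.0, 1.0, 200))
print("estimated pole:", z0)                                  # close to 0.5 + 0.3j
print("max |model - f| on |z| = 0.9:", np.abs(model(test) - f(test)).max())
```

Handling several singularities, as in the paper, would require one hidden neuron per pole and the adaptive activation construction and convergence analysis developed there; this sketch only conveys the single-pole structure of the approach.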