Closed-Form Interpretation of Neural Network Latent Spaces with Symbolic Gradients

Sebastian J. Wetzel, Zakaria Patel

Published: 2024/9/9

Abstract

It has been demonstrated that artificial neural networks such as autoencoders or Siamese networks encode meaningful concepts in their latent spaces. However, no comprehensive framework exists for retrieving this information in human-readable form without prior knowledge. In quantitative disciplines, concepts are typically formulated as equations. Hence, to extract these concepts, we introduce a framework for finding closed-form interpretations of neurons in latent spaces of artificial neural networks. The interpretation framework is based on embedding trained neural networks into an equivalence class of functions that encode the same concept. We interpret these neural networks by finding an intersection between the equivalence class and human-readable equations defined by a symbolic search space. Computationally, this framework is based on finding a symbolic expression whose normalized gradients match the normalized gradients of a specific neuron with respect to the input variables. The effectiveness of our approach is demonstrated by retrieving invariants of matrices and conserved quantities of dynamical systems from latent spaces of Siamese neural networks.
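To make the gradient-matching idea concrete, below is a minimal sketch (not the authors' code) of the objective described above: a candidate symbolic expression is scored by how closely its normalized input gradients align with the normalized gradients of a chosen latent neuron. The functions `neuron` and `candidate`, and the trace example in the usage block, are hypothetical stand-ins introduced here for illustration.

```python
# Minimal sketch of the normalized-gradient matching objective, assuming both
# the latent neuron and the candidate symbolic expression are available as
# differentiable functions of the input x.
import torch

def normalized_gradient(f, x):
    """Gradient of the scalar function f at each sample in x, normalized to unit length."""
    x = x.clone().requires_grad_(True)
    y = f(x).sum()                      # summing over the batch keeps gradients per-sample
    (grad,) = torch.autograd.grad(y, x)
    return grad / grad.norm(dim=-1, keepdim=True)

def gradient_matching_loss(neuron, candidate, x):
    """Mean squared difference between the normalized gradients of the latent
    neuron and those of the candidate symbolic expression."""
    g_neuron = normalized_gradient(neuron, x)
    g_cand = normalized_gradient(candidate, x)
    # The scale and sign of a latent neuron are arbitrary, so compare up to an overall sign.
    return torch.minimum(
        ((g_neuron - g_cand) ** 2).sum(-1),
        ((g_neuron + g_cand) ** 2).sum(-1),
    ).mean()

if __name__ == "__main__":
    # Hypothetical example: the neuron has learned (an affine image of) the trace
    # of a 2x2 matrix whose entries are flattened into x.
    neuron = lambda x: 3.0 * (x[:, 0] + x[:, 3]) + 1.0   # stand-in for a trained latent neuron
    candidate = lambda x: x[:, 0] + x[:, 3]              # symbolic guess: the trace
    x = torch.randn(128, 4)
    print(gradient_matching_loss(neuron, candidate, x).item())  # close to 0
```

In practice the candidate expression would be proposed by a symbolic search (e.g. symbolic regression over the chosen search space), and this loss would serve as the fitness criterion selecting expressions that lie in the same equivalence class as the neuron.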