Deep Learning as the Disciplined Construction of Tame Objects
Gilles Bareilles, Allen Gehret, Johannes Aspman, Jana Lepšová, Jakub Mareček
Published: 2025/9/22
Abstract
One can view deep-learning models as compositions of functions belonging to so-called tame geometry. In this expository note, we give an overview of some topics at the interface of tame geometry (also known as o-minimality), optimization theory, and the theory and practice of deep learning. To do so, we gradually introduce the concepts and tools used to build convergence guarantees for stochastic gradient descent in a general nonsmooth, nonconvex, but tame, setting. This illustrates some of the ways in which tame geometry is a natural mathematical framework for the study of AI systems, especially within deep learning.
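As a concrete illustration of the setting described above (a sketch of ours, not an example from the note itself), the following Python snippet runs stochastic gradient descent on a small ReLU network. ReLU networks are piecewise-polynomial, hence definable in an o-minimal structure, i.e., tame; the loss is nonsmooth and nonconvex, which is exactly the regime the convergence guarantees address. The architecture, data, and hyperparameters are arbitrary choices for illustration.

    # A minimal sketch, assuming PyTorch: SGD on a nonsmooth, nonconvex,
    # but tame objective (a small ReLU network). Autograd returns a
    # generalized derivative at the nondifferentiable points of ReLU.
    import torch

    torch.manual_seed(0)
    X = torch.randn(256, 2)                             # synthetic inputs
    y = (X[:, 0] * X[:, 1] > 0).float().unsqueeze(1)    # binary labels

    model = torch.nn.Sequential(
        torch.nn.Linear(2, 16), torch.nn.ReLU(),       # piecewise-linear, tame
        torch.nn.Linear(16, 1),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.BCEWithLogitsLoss()

    for step in range(200):
        idx = torch.randint(0, 256, (32,))              # minibatch sampling
        opt.zero_grad()
        loss = loss_fn(model(X[idx]), y[idx])
        loss.backward()                                 # nonsmooth "gradient"
        opt.step()

Every function composed here (affine maps, ReLU, the logistic loss) is definable in a common o-minimal structure, so the whole training objective is tame, which is the structural hypothesis under which the guarantees surveyed in the note apply.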