A Modern Theory of Cross-Validation through the Lens of Stability

Jing Lei

Published: 2025/5/29

Abstract

Modern data analysis and statistical learning are marked by complex data structures and black-box algorithms. Data complexity stems from technologies like imaging, remote sensing, wearables, and genomic sequencing. Simultaneously, black-box models -- especially deep neural networks -- have achieved impressive results. This combination raises new challenges for uncertainty quantification and statistical inference, which we term "black-box inference." Black-box inference is difficult due to the lack of traditional modeling assumptions and the opaque behavior of modern estimators. Both make it hard to characterize the distribution of estimation errors. A popular solution is post-hoc randomization, which, under mild assumptions like exchangeability, can yield valid uncertainty quantification. Such methods range from classical techniques such as permutation tests, the jackknife, and the bootstrap, to recent innovations like conformal inference. These approaches typically need little knowledge of data distributions or the internal workings of estimators. Many rely on the idea that estimators behave similarly under small data changes -- a concept formalized as stability. Over time, stability has become a key principle in data science, informing work on generalization error, privacy, and adaptive inference. This article investigates cross-validation (CV) -- a widely used resampling method -- through the lens of stability. We first review recent theoretical results on CV for estimating generalization error and model selection under stability. We then examine uncertainty quantification for CV-based risk estimates. Together, these insights yield new theory and tools, which we apply to topics like model selection, selective inference, and conformal prediction.
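To make the abstract's setting concrete, here is a minimal illustrative sketch (not from the paper) of K-fold cross-validation used to estimate a generalization risk, followed by a naive normal-approximation confidence interval for the CV risk estimate. The ridge model, synthetic data, fold count, and 95% level are all assumptions made for illustration; the interval treats the hold-out losses as roughly independent, ignoring the between-fold dependence that stability arguments are used to control.

```python
# Illustrative sketch only: K-fold CV risk estimation with a naive CLT-style
# interval. Model, data, and confidence level are assumptions, not the paper's.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)  # synthetic regression data

K = 10
losses = np.empty(n)  # per-observation hold-out squared-error losses
for train_idx, test_idx in KFold(n_splits=K, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    losses[test_idx] = (y[test_idx] - model.predict(X[test_idx])) ** 2

cv_risk = losses.mean()
# Naive standard error treating the n hold-out losses as roughly independent;
# between-fold dependence is exactly what stability-based theory must handle.
se = losses.std(ddof=1) / np.sqrt(n)
print(f"CV risk: {cv_risk:.3f}  (naive 95% CI: "
      f"{cv_risk - 1.96 * se:.3f}, {cv_risk + 1.96 * se:.3f})")
```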