Optimal linear prediction with functional observations: Why you can use a simple post-dimension reduction estimator

Won-Ki Seo

Published: 2024/1/12

Abstract

We study the optimal linear prediction of a random function taking values in an infinite-dimensional Hilbert space. We begin by characterizing the mean square prediction error (MSPE) associated with a linear predictor and discussing the minimal achievable MSPE. This analysis reveals that, in general, multiple linear predictors minimize the MSPE, and even when the minimizer is unique, consistently estimating it from finite samples is generally impossible. Nevertheless, we can define asymptotically optimal linear operators whose empirical MSPEs approach the minimal achievable level as the sample size increases. We show that, interestingly, standard post-dimension-reduction estimators, which are widely used in the literature, attain this asymptotic optimality under minimal conditions.
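To make the idea of a post-dimension-reduction estimator concrete, the following is a minimal sketch under assumptions not taken from the paper: functional observations are discretized on a grid, the toy data-generating process (sinusoidal modes, a hypothetical operator `A`) is invented for illustration, and dimension reduction is done via principal components of the empirical covariance of the predictor curves, followed by least squares on the resulting scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized functional observations: n curves evaluated on a grid of p points.
n, p, K = 200, 50, 3
grid = np.linspace(0.0, 1.0, p)

# Hypothetical data-generating process (for illustration only):
# X is built from a few smooth modes, and Y = A X + noise for a toy operator A.
basis = np.stack([np.sin((k + 1) * np.pi * grid) for k in range(K)])   # (K, p)
scores = rng.normal(size=(n, K)) * np.array([2.0, 1.0, 0.5])           # decaying variances
X = scores @ basis + 0.05 * rng.normal(size=(n, p))
A = np.outer(np.cos(np.pi * grid), np.sin(np.pi * grid)) / p           # toy linear operator
Y = X @ A.T + 0.05 * rng.normal(size=(n, p))

# Post-dimension-reduction estimator:
# 1) estimate the leading K principal directions of the centered predictors,
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:K].T                         # (p, K) empirical eigenfunctions
# 2) regress the (centered) responses on the K-dimensional scores,
S = Xc @ V                           # (n, K) projection scores
B, *_ = np.linalg.lstsq(S, Y - Y.mean(axis=0), rcond=None)
# 3) the fitted linear predictor acts only through the reduced subspace.
Yhat = Y.mean(axis=0) + S @ B

# Empirical MSPE of the fitted predictor (grid approximation of the L2 norm).
mspe = np.mean(np.sum((Y - Yhat) ** 2, axis=1)) / p
print(mspe)
```

The point of the sketch is only structural: the estimated operator is restricted to the span of finitely many empirical eigenfunctions, which is the kind of estimator whose empirical MSPE the abstract claims approaches the minimal achievable level as the sample size grows.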