Highly robust factored principal component analysis for matrix-valued outlier accommodation and explainable detection via matrix minimum covariance determinant

Wenhui Wu, Changchun Shang, Jianhua Zhao, Xuan Ma, Yue Wang

Published: 2025/9/30

Abstract

Principal component analysis (PCA) is a classical and widely used method for dimensionality reduction, with applications in data compression, computer vision, pattern recognition, and signal processing. However, PCA is designed for vector-valued data and encounters two major challenges when applied to matrix-valued data with heavy-tailed distributions or outliers: (1) vectorization disrupts the intrinsic matrix structure, leading to information loss and the curse of dimensionality, and (2) PCA is highly sensitive to outliers. Factored PCA (FPCA) addresses the first issue through probabilistic modeling, using a matrix normal distribution that explicitly represents row and column covariances via a separable covariance structure, thereby preserving the two-way dependency and matrix form of the data. Building on FPCA, we propose highly robust FPCA (HRFPCA), a robust extension that replaces maximum likelihood estimators with the matrix minimum covariance determinant (MMCD) estimators. This modification enables HRFPCA to retain FPCA's ability to model matrix-valued data while achieving a breakdown point close to 50%, substantially improving resistance to outliers. Furthermore, HRFPCA produces the score–orthogonal distance analysis (SODA) plot, which effectively visualizes and classifies matrix-valued outliers. Extensive simulations and real-data analyses demonstrate that HRFPCA consistently outperforms competing methods in robustness and outlier detection, underscoring its effectiveness and broad applicability.
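The separable covariance structure mentioned in the abstract can be made concrete with a short sketch. This is an illustrative example, not the paper's implementation: for a matrix normal variate X with p×p row covariance U and q×q column covariance V, the covariance of vec(X) factors as the Kronecker product V ⊗ U, so the model needs only O(p² + q²) covariance parameters instead of the O(p²q²) required after naive vectorization. The dimensions p = 3, q = 4 below are arbitrary choices for illustration.

```python
import numpy as np

# Sketch of the matrix normal's separable covariance (assumed setup, not the
# authors' code): for X ~ MN(M, U, V), Cov(vec(X)) = V kron U.
rng = np.random.default_rng(0)
p, q = 3, 4  # arbitrary row/column dimensions for illustration

# Construct valid (symmetric positive-definite) row and column covariances.
A = rng.standard_normal((p, p))
U = A @ A.T + p * np.eye(p)          # p x p row covariance
B = rng.standard_normal((q, q))
V = B @ B.T + q * np.eye(q)          # q x q column covariance

# Covariance of the vectorized matrix under the separable model.
Sigma = np.kron(V, U)                # (pq x pq)

# Parameter counts: unstructured vec-covariance vs. separable structure.
full_params = (p * q) * (p * q + 1) // 2           # 78 for p=3, q=4
sep_params = p * (p + 1) // 2 + q * (q + 1) // 2   # 16 for p=3, q=4
print(Sigma.shape, full_params, sep_params)
```

The parameter reduction (here 78 to 16) is what lets FPCA-style models preserve the two-way dependency of matrix data without vectorization's curse of dimensionality; the MMCD step described in the abstract then replaces the maximum likelihood estimates of U and V with high-breakdown robust counterparts.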