Superfast Low Rank Approximation

Soo Go, Qi Luan, Victor Y. Pan, John Svadlenka, Liang Zhao

Published: 2018/12/29

Abstract

Low rank approximation of a matrix (LRA) is a highly important area of Numerical Linear and Multilinear Algebra and of Data Mining and Analysis. One can operate with an LRA superfast -- using far fewer memory cells and flops than the input matrix has entries. Can we, however, compute an LRA of a matrix superfast? YES and NO. For worst case inputs, any LRA algorithm fails miserably unless it accesses all input entries, but in computational practice worst case inputs seem to appear rarely, and accurate LRAs are routinely computed superfast for large and important classes of matrices, in particular in the memory-efficient form of CUR, widely used in data analysis. We advance formal study of this YES and NO coexistence by proving novel universal upper bounds on the spectral output error norms of all CUR LRA algorithms and, under a fixed probabilistic structure in the space of input matrices, on both the spectral and Frobenius error norms of nearly all sketching LRA algorithms. These bounds imply that superfast LRA algorithms of the two kinds fail miserably only for a very narrow input class. Furthermore, in our numerical tests such superfast algorithms were consistently much more accurate than our upper estimates ensure and usually were reasonably close to optimal.
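To illustrate the memory-efficient CUR form mentioned above, the following is a minimal sketch (not the authors' algorithm) of a superfast CUR LRA built from uniformly sampled rows and columns: the approximation A ≈ C·U·R reads only r·(m+n) entries of an m×n input, and its factors can be stored and applied without ever forming the full matrix. The function name `cur_lra` and the uniform sampling strategy are illustrative assumptions.

```python
import numpy as np

def cur_lra(A, r, seed=None):
    # Illustrative superfast CUR sketch (not the paper's method):
    # sample r rows and r columns uniformly at random, then link
    # them via the pseudoinverse of the r x r core block, so that
    # only r*(m + n) entries of A are ever read.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    I = rng.choice(m, size=r, replace=False)   # sampled row indices
    J = rng.choice(n, size=r, replace=False)   # sampled column indices
    C = A[:, J]                                # m x r column factor
    R = A[I, :]                                # r x n row factor
    U = np.linalg.pinv(A[np.ix_(I, J)])        # r x r linking factor
    return C, U, R                             # A is approximated by C @ U @ R

# For a matrix of exact rank r, CUR recovers it (almost surely) exactly,
# since the r x r core block is then nonsingular with probability 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 80))
C, U, R = cur_lra(A, r=2, seed=1)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
print(err < 1e-8)
```

For exactly low-rank inputs this reconstruction is exact whenever the sampled core block is nonsingular; for noisy or nearly low-rank inputs the error of such superfast schemes depends on the input class, which is precisely the regime the paper's universal upper bounds address.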
