Chunked TabPFN: Exact Training-Free In-Context Learning for Long-Context Tabular Data
Renat Sergazinov, Shao-An Yin
Published: 2025/8/30
Abstract
TabPFN v2 outperforms tree-based models on several tabular benchmarks, a notable result given that tree-based models are usually the strongest choice for tabular data. However, it cannot handle contexts beyond 10K tokens because transformer attention incurs quadratic compute and memory costs. Unlike existing approaches that rely on context compression, such as selecting representative samples via K-nearest neighbors (KNN), we introduce a \textbf{tiled-block} strategy for computing attention within the TabPFN framework. This design is compatible with standard GPU setups and, to the best of our knowledge, is the first to enable TabPFN to \textbf{process long contexts without any pre-processing}. We demonstrate the effectiveness of our approach on the standard TabArena benchmark.
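To illustrate the general idea behind tiled-block attention (not the paper's actual implementation), the following minimal PyTorch sketch computes exact softmax attention by streaming over key/value blocks with a numerically stable running softmax, so peak memory scales with the block size rather than the full context length. The function and parameter names (`tiled_attention`, `block_size`) are illustrative assumptions, not the authors' API.

```python
# Minimal sketch of exact tiled (chunked) attention using a streaming softmax.
# Illustrative only; names and shapes are assumptions, not the paper's code.
import torch

def tiled_attention(q, k, v, block_size=2048):
    """Exact softmax attention over key/value chunks.

    q: (n_q, d), k: (n_kv, d), v: (n_kv, d_v)
    Peak memory scales with block_size instead of n_kv.
    """
    scale = q.shape[-1] ** -0.5
    n_q = q.shape[0]
    # Running statistics for the numerically stable online softmax.
    m = torch.full((n_q, 1), float("-inf"), device=q.device, dtype=q.dtype)  # running max
    l = torch.zeros((n_q, 1), device=q.device, dtype=q.dtype)                # running normalizer
    acc = torch.zeros((n_q, v.shape[-1]), device=q.device, dtype=q.dtype)    # unnormalized output

    for start in range(0, k.shape[0], block_size):
        kb = k[start:start + block_size]
        vb = v[start:start + block_size]
        s = (q @ kb.T) * scale                                   # block of attention scores
        m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
        p = torch.exp(s - m_new)                                 # block probabilities (rescaled)
        correction = torch.exp(m - m_new)                        # rescale previous accumulators
        l = l * correction + p.sum(dim=-1, keepdim=True)
        acc = acc * correction + p @ vb
        m = m_new

    return acc / l
```

Because the running-max rescaling preserves the full softmax normalization, the result matches dense attention up to floating-point error; the chunking changes only the memory footprint, not the output, which is what "exact" refers to here.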