Jacobi Prior: An Alternative Bayesian Method for Supervised Learning
Sourish Das, Shouvik Sardar
Published: 2024/4/17
Abstract
The Jacobi prior offers an alternative Bayesian framework for predictive modelling, designed to achieve superior computational efficiency without compromising predictive performance. This scalable method is suitable for image classification and other computationally intensive tasks. Compared to widely used methods such as Lasso, Ridge, Elastic Net, uniLasso, the MCMC-based Horseshoe prior, and non-Bayesian machine learning methods including Support Vector Machines (SVM), Random Forests, and Extreme Gradient Boosting (XGBoost), the Jacobi prior achieves competitive or better accuracy at significantly reduced computational cost. The method is well suited to distributed computing environments, as it naturally accommodates data partitioned across multiple servers. We propose a parallelisable Monte Carlo algorithm to quantify the uncertainty in the estimated coefficients. We establish the theoretical foundations of the Jacobi estimator by studying its asymptotic properties; in particular, we prove a Bernstein--von Mises theorem for the Jacobi posterior. To demonstrate its practical utility, we conduct a comprehensive simulation study comprising seven experiments focused on statistical consistency, prediction accuracy, scalability, sensitivity, and robustness. In a spine classification task, we extract last-layer features from a fine-tuned ResNet-50 model and evaluate multiple classifiers, including Jacobi-Multinomial logit regression, SVM, and Random Forest. The Jacobi prior achieves state-of-the-art results in recall and predictive stability, especially when paired with domain-specific features. This highlights its potential for scalable, high-dimensional learning in medical image analysis. All code and datasets used in this paper are available at: https://github.com/sourish-cmi/Jacobi-Prior/