A Hypothesis-First Framework for Mechanistic Model Evaluation and Selection in Neuroimaging
Dominic Boutet, Sylvain Baillet
Published: 2025/9/19
Abstract
Neuroimaging provides rich measurements of brain structure and neural activity, but turning these data into mechanistic insight remains difficult. Statistical models quantify associations with little consideration of how they arise, whereas bio-realistic models directly embody candidate mechanisms but remain hard to deploy rigorously without specialized training. We present a framework that recasts modeling choices as testable mechanistic hypotheses and supplies a simple protocol for rejecting inappropriate model specifications, such as under-/over-parameterization or invalid simplifying assumptions, based on predefined criteria before any parameter inference. The key idea is expected model behavior under feature generalization constraints: instead of judging a model solely by how well it fits a specific target feature of interest Y at an optimal parameter set, we evaluate the model's expected Y output when the model is constrained to reproduce a broader, or distinct, feature Z over the entire parameter space. We then compare a mirror statistical model, derived from the model's expected Y outputs, to the empirical statistical model using standard statistics. In synthetic experiments with known ground truth (Wilson-Cowan dynamics), the framework correctly rejects mis-specified hypotheses, penalizes unnecessary degrees of freedom, and preserves valid specifications. This provides a practical, hypothesis-first route to using mechanistic models for neuroimaging without requiring expert-level methodology.
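The evaluation logic described above can be illustrated with a minimal sketch. Everything here is hypothetical: the toy two-parameter model, the linear feature maps, the tolerance, and the z-score rejection criterion stand in for the paper's actual models and predefined statistics. The point is only the workflow: constrain the full parameter space by feature Z, take the expected Y over the admissible set rather than at a single best-fit point, then compare that mirror prediction to the empirical Y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate model: maps parameters theta to two scalar
# features, Z (the broad constraint feature) and Y (the target feature).
def model_features(theta):
    z = theta[0] + 0.5 * theta[1]        # toy feature Z
    y = 2.0 * theta[0] - theta[1] + 0.1  # toy feature Y
    return z, y

# 1. Sample the entire parameter space (no parameter inference yet).
thetas = rng.uniform(-1.0, 1.0, size=(10_000, 2))
zs, ys = np.array([model_features(t) for t in thetas]).T

# 2. Feature generalization constraint: keep only parameter sets whose
#    Z output reproduces the empirical Z within a tolerance.
z_empirical = 0.3
constrained = np.abs(zs - z_empirical) < 0.05

# 3. Expected Y under the constraint, averaged over the whole
#    admissible set rather than evaluated at one optimal theta.
expected_y = ys[constrained].mean()

# 4. Compare the mirror prediction against the empirical Y with a
#    standard statistic (here a z-score, with a predefined criterion).
y_empirical = 0.55
z_score = (y_empirical - expected_y) / ys[constrained].std(ddof=1)
reject = bool(abs(z_score) > 1.96)
print(f"expected Y = {expected_y:.2f}, z = {z_score:.2f}, reject = {reject}")
```

In the full framework, step 4 would compare mirror and empirical statistical models (e.g., across subjects or conditions) rather than two scalars, but the rejection decision is made the same way: before any parameter fitting, from predefined criteria.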