On the interplay between prior weight and variance of the robustification component in Robust Mixture Prior Bayesian Dynamic Borrowing approach
Marco Ratta, Gaelle Saint-Hilary, Mauro Gasparini, Pavel Mozgunov
Published: 2025/9/1
Abstract
The Robust Mixture Prior (RMP) is a popular Bayesian dynamic borrowing method that combines an informative historical distribution with a less informative component (referred to as the robustification component) in a mixture prior, in order to enhance the efficiency of hybrid-control randomized trials. Current practice typically focuses solely on the selection of the prior weight that governs the relative influence of these two components, often fixing the variance of the robustification component to that of a single observation. In this study we demonstrate that the performance of RMPs critically depends on the joint selection of both the weight and the variance of the robustification component. In particular, we show that a wide range of weight-variance pairs can yield practically identical posterior inferences (in particular regions of the parameter space) and that large-variance robustification components may be employed without incurring the so-called Lindley's paradox. We further show that the use of large-variance robustification components leads to improved asymptotic Type I error control and enhanced robustness of the RMP to the specification of the location parameter of the robustification component. Finally, we leverage these theoretical results to propose a novel and practical hyper-parameter elicitation routine.
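To make the setting concrete, the following sketch shows the standard conjugate posterior update for a two-component normal RMP. It is not code from the paper; the function name, parameterization, and the normal-normal (known sampling variance) setting are illustrative assumptions, chosen because they admit a closed-form update of both the mixture weight and each component.

```python
import math


def rmp_posterior(ybar, s2, w, m_h, v_h, m_r, v_r):
    """Posterior under a two-component Robust Mixture Prior (illustrative sketch).

    Prior:      w * N(m_h, v_h)        -- informative historical component
           + (1-w) * N(m_r, v_r)       -- robustification component
    Likelihood: ybar ~ N(theta, s2), with known sampling variance s2.

    Returns the posterior weight on the historical component and the
    (mean, variance) of each updated component.
    """

    def norm_pdf(x, mean, var):
        # Density of N(mean, var) evaluated at x.
        return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

    # Marginal likelihood of ybar under component k is N(ybar; m_k, v_k + s2);
    # the posterior mixture weight is proportional to (prior weight) x (marginal).
    ml_h = norm_pdf(ybar, m_h, v_h + s2)
    ml_r = norm_pdf(ybar, m_r, v_r + s2)
    w_post = w * ml_h / (w * ml_h + (1 - w) * ml_r)

    def conjugate_update(m, v):
        # Standard normal-normal precision-weighted update of one component.
        post_var = 1.0 / (1.0 / v + 1.0 / s2)
        post_mean = post_var * (m / v + ybar / s2)
        return post_mean, post_var

    return w_post, conjugate_update(m_h, v_h), conjugate_update(m_r, v_r)
```

When the current data agree with the historical component, `w_post` rises above the prior weight `w` and borrowing is retained; under prior-data conflict, the marginal likelihood of the tight historical component collapses and the posterior mass shifts to the robustification component. The abstract's point is that this behavior depends jointly on `w` and on the robustification variance `v_r`, not on `w` alone.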