Latent class multivariate probit and latent trait models for evaluating test accuracy without a gold standard: A simulation study
Enzo Cerullo, Sean Pinkney, Alex J. Sutton, Tim Lucas, Nicola J. Cooper, Hayley E. Jones
Published: 2025/9/23
Abstract
In the context of an imperfect gold standard, latent class modelling can be used to estimate the accuracy of multiple medical tests. However, the conditional independence (CI) assumption is rarely thought to be clinically valid. Two models that accommodate conditional dependence are the latent class multivariate probit (LC-MVP) and latent trait models. Despite the greater flexibility of LC-MVP, which models full correlation matrices rather than the latent trait model's restricted structure, the latent trait model has been more widely used. No simulation studies have directly compared these two models. We conducted a comprehensive simulation study comparing both models across five data generating mechanisms: CI, low-heterogeneity (latent trait-generated), and high-heterogeneity (LC-MVP-generated) correlation structures. We evaluated multiple priors, including novel constrained correlation priors using Pinkney's method, which preserves prior interpretability. Models were fit using our BayesMVP R package, which achieves GPU-like speed-ups on these inherently serial models. The LC-MVP model demonstrated superior overall performance. Whilst the latent trait model performed acceptably on data generated from its own structure, it failed for high-heterogeneity structures, sometimes performing worse than the CI model. The CI model performed poorly under most conditionally dependent structures. We also found ceiling effects: high sensitivities reduced the importance of correlation recovery, explaining apparent paradoxes in which models achieved good performance despite poor correlation recovery. Our results strongly favour LC-MVP for practical applications. The latent trait model's severe failures under realistic correlation structures make it the riskier choice. By contrast, LC-MVP with custom correlation constraints and priors provides a safer, more flexible framework for test accuracy evaluation without a perfect gold standard.
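To make the LC-MVP data generating mechanism concrete, the following is a minimal illustrative sketch (not the authors' BayesMVP code; all names and parameter values are assumptions chosen for illustration): each subject's latent disease status is drawn from a Bernoulli distribution, correlated latent continuous variables are drawn from a class-specific multivariate normal, and binary test results are obtained by thresholding at zero. Conditional dependence between tests within a class is induced by the off-diagonal entries of the class-specific correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 5000, 4        # subjects, number of tests (illustrative values)
prev = 0.3            # disease prevalence

# Class-specific latent means: a higher mean makes a positive result more likely.
mu = {1: np.full(T, 1.0),    # diseased class
      0: np.full(T, -1.0)}   # non-diseased class

# Class-specific correlation matrices: here, conditional dependence
# (correlation 0.5) among the diseased only; CI among the non-diseased.
R = {1: 0.5 * np.ones((T, T)) + 0.5 * np.eye(T),
     0: np.eye(T)}

d = rng.binomial(1, prev, size=n)          # true (latent) disease status
y = np.empty((n, T), dtype=int)            # observed binary test results
for c in (0, 1):
    idx = d == c
    z = rng.multivariate_normal(mu[c], R[c], size=idx.sum())
    y[idx] = (z > 0).astype(int)           # threshold latent variables at 0

# Empirical per-test accuracy, using the (in practice unobserved) true status:
sens_hat = y[d == 1].mean(axis=0)          # sensitivity, approx. Phi(1) ~ 0.84
spec_hat = 1 - y[d == 0].mean(axis=0)      # specificity, approx. Phi(1) ~ 0.84
```

In a real analysis `d` is latent, which is exactly why latent class models are needed; the simulation above only makes the generating assumptions explicit.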