Variance-Bounded Evaluation without Ground Truth: VB-Score
Kaihua Ding
Published: 2025/9/26
Abstract
Reliable evaluation is a central challenge in machine learning when tasks lack ground-truth labels or involve ambiguity and noise. Conventional frameworks, rooted in the Cranfield paradigm and label-based metrics, fail in such cases because they cannot assess how robustly a system performs under uncertain interpretations. We introduce VB-Score, a variance-bounded evaluation framework that measures both effectiveness and robustness without requiring ground truth. Given a query or input, VB-Score enumerates plausible interpretations, assigns them probabilities, and evaluates system output by its expected success penalized by its variance, rewarding consistent performance across intents. We provide a formal analysis of VB-Score, establishing its range, monotonicity, and stability properties, and relate it to risk-sensitive measures such as mean-variance utility. Experiments on ambiguous queries and entity-centric retrieval tasks show that VB-Score surfaces robustness differences hidden by conventional metrics. By enabling reproducible, label-free evaluation, VB-Score offers a principled foundation for benchmarking machine learning systems in ambiguous or label-scarce domains.
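To make the idea concrete, the following is a minimal Python sketch of a variance-penalized score of this kind. The interpretation weights, the per-interpretation success values, and the penalty form E[S] − λ·√Var(S) are illustrative assumptions based on the mean-variance framing above, not the paper's exact definition.

```python
import math

def vb_score(success_probs, weights, lam=1.0):
    """
    Sketch of a variance-bounded score (assumed form): E[S] - lam * sqrt(Var[S]).

    success_probs: per-interpretation success rates of the system, each in [0, 1]
    weights: probabilities assigned to each plausible interpretation (sum to 1)
    lam: risk-aversion coefficient penalizing inconsistency across intents
    """
    assert len(success_probs) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9

    # Expected success over the distribution of plausible interpretations.
    mean = sum(w * s for w, s in zip(weights, success_probs))
    # Variance of success across interpretations; lower variance = more consistent.
    var = sum(w * (s - mean) ** 2 for w, s in zip(weights, success_probs))
    return mean - lam * math.sqrt(var)


if __name__ == "__main__":
    # Hypothetical ambiguous query with three plausible intents.
    weights = [0.5, 0.3, 0.2]      # assumed intent probabilities
    system_a = [0.9, 0.2, 0.1]     # strong on one intent, weak on the rest
    system_b = [0.6, 0.6, 0.6]     # consistent across all intents
    print("System A:", round(vb_score(system_a, weights), 3))
    print("System B:", round(vb_score(system_b, weights), 3))
```

In this toy example the two systems have similar expected success (0.53 vs. 0.60), but once variance is penalized the consistent system scores markedly higher (0.60 vs. roughly 0.16), illustrating the kind of robustness difference that a purely expectation-based metric would hide.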