Understanding the Effects of Miscalibrated AI Confidence on User Trust, Reliance, and Decision Efficacy

Jingshu Li, Yitian Yang, Renwen Zhang, Q. Vera Liao, Tianqi Song, Zhengtao Xu, Yi-chieh Lee

Published: 2024/2/12

Abstract

Providing well-calibrated AI confidence can help promote users' appropriate trust in and reliance on AI, both of which are essential for AI-assisted decision-making. However, calibrating AI confidence -- providing a confidence score that accurately reflects the true likelihood of the AI being correct -- is known to be challenging. To understand the effects of AI confidence miscalibration, we conducted a first experiment. The results indicate that miscalibrated AI confidence impairs users' appropriate reliance, reduces the efficacy of AI-assisted decision-making, and is difficult for users to detect. In a second experiment, we examined whether communicating the AI's confidence calibration level could mitigate these issues. We find that such communication helps users detect AI miscalibration. Nevertheless, because it also decreases users' trust in uncalibrated AI, leading to substantial under-reliance, it does not improve decision efficacy. We discuss design implications based on these findings and future directions for addressing the risks and ethical concerns associated with AI miscalibration.
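
To make the notion of calibration above concrete: a common way to quantify miscalibration is Expected Calibration Error (ECE), which bins predictions by stated confidence and compares each bin's average confidence with its observed accuracy. The sketch below is an illustration added here, not the paper's measure or code.

    # Minimal ECE sketch (illustrative only; not from the paper).
    # A score of 0.0 means confidence matches accuracy in every bin;
    # larger values mean greater miscalibration.
    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """confidences: predicted confidence in [0, 1]; correct: 0/1 outcomes."""
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if not mask.any():
                continue
            avg_conf = confidences[mask].mean()  # stated confidence in bin
            accuracy = correct[mask].mean()      # observed accuracy in bin
            ece += mask.mean() * abs(avg_conf - accuracy)
        return ece

    # Example: an overconfident model reports 0.9 confidence
    # but is correct only ~60% of the time -> ECE near 0.3.
    rng = np.random.default_rng(0)
    conf = np.full(1000, 0.9)
    corr = rng.random(1000) < 0.6
    print(expected_calibration_error(conf, corr))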
