Proactive AI Adoption Can Be Threatening: When Help Backfires
Dana Harari, Ofra Amir
Published: 2025/9/11
Abstract
Artificial intelligence (AI) assistants are increasingly embedded in workplace tools, raising the question of how an assistant's initiative-taking shapes adoption. Prior work highlights trust and expectation mismatches as barriers, but the underlying psychological mechanisms remain unclear. Drawing on self-affirmation and social exchange theories, we theorize that unsolicited help elicits self-threat, reducing willingness to accept assistance, likelihood of future use, and performance expectancy. We report two vignette-based experiments (Study 1: N = 761; Study 2: N = 571, preregistered). Study 1 compared anticipatory and reactive help provided by an AI versus a human, while Study 2 distinguished between offering help (suggesting it) and providing help (acting automatically). In Study 1, AI help was perceived as more threatening than human help. Across both studies, anticipatory help increased perceived threat and reduced adoption outcomes. Our findings identify self-threat as a mechanism explaining why proactive AI features may backfire and suggest design implications for AI initiative.