Designing AI-Agents with Personalities: A Psychometric Approach
Muhua Huang, Xijuan Zhang, Christopher Soto, James Evans
Published: 2024/10/25
Abstract
We introduce a methodology for assigning quantifiable and psychometrically validated personalities to AI-Agents using the Big Five framework. Across three studies, we evaluate its feasibility and limits. In Study 1, we show that large language models (LLMs) capture semantic similarities among Big Five measures, providing a basis for personality assignment. In Study 2, we create AI-Agents using prompts based on the Big Five Inventory-2 (BFI-2) in the Likert or Expanded format, and find that, when paired with newer LLMs (e.g., GPT-4, GPT-4o, Llama, DeepSeek), these AI-Agents align more closely with human responses on the Mini-Markers test than those generated with binary adjective prompts or older models, although finer-grained results (e.g., factor loading patterns) were not consistent between AI-Agents and human participants. In Study 3, we validate our AI-Agents with risk-taking and moral dilemma vignettes. We find that while fine-tuning shifts responses toward more moral judgments, AI-Agent correlations between the input Big Five traits and the output moral judgments mirror those from human participants. Overall, our results show that AI-Agents align with humans in correlations between input Big Five traits and output responses and may serve as useful tools for preliminary research. Nevertheless, discrepancies in finer response patterns indicate that AI-Agents cannot (yet) fully substitute for human participants in high-precision or high-stakes projects.
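To make the personality-assignment step concrete, the sketch below shows one plausible way to instantiate an AI-Agent with a Big Five profile via a system prompt and collect a Likert-style response to a Mini-Markers-type adjective item. This is a minimal illustration, not the paper's actual materials: the persona encoding, prompt wording, trait labels, and the single example item are all assumptions, and the OpenAI chat-completions call stands in for whichever LLM backend is used.

```python
# Minimal sketch of persona-prompted agent responding to a Likert item.
# Persona encoding, prompt wording, and the example item are illustrative
# assumptions; they are not the BFI-2-based prompts used in the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical Big Five profile: domain levels on a 1-5 scale.
persona = {
    "Extraversion": 4,
    "Agreeableness": 2,
    "Conscientiousness": 5,
    "Negative Emotionality": 1,
    "Open-Mindedness": 3,
}

system_prompt = (
    "You are a person with the following personality profile "
    "(1 = very low, 5 = very high): "
    + "; ".join(f"{trait}: {level}" for trait, level in persona.items())
    + ". Answer all questions fully in character."
)

# One Mini-Markers-style adjective item with a 9-point accuracy scale.
item = (
    "How accurately does the adjective 'talkative' describe you? "
    "Respond with a single number from 1 (very inaccurate) to 9 (very accurate)."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": item},
    ],
)
print(response.choices[0].message.content)
```

Repeating such calls across items and across many sampled personas would yield the agent response matrices needed to compare factor structures and trait-outcome correlations against human data, as the studies above describe.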