How Model Size, Temperature, and Prompt Style Affect LLM-Human Assessment Score Alignment

Julie Jung, Max Lu, Sina Chole Benker, Dogus Darici

Published: 2025/9/14

Abstract

We examined how model size, temperature, and prompt style affect Large Language Models' (LLMs) alignment within a single model, between models, and with human raters in assessing clinical reasoning skills. Model size emerged as a key factor in LLM-human score alignment. The study highlights the importance of checking alignment across multiple levels.
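To make the three levels of alignment concrete, the sketch below shows one simple way they could be quantified. It is not the paper's method: the scores, model labels, and use of Pearson correlation as the agreement metric are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): illustrating intra-model,
# inter-model, and LLM-human alignment with Pearson correlation over
# hypothetical numeric rubric scores.
import numpy as np

def pearson(a, b):
    """Pearson correlation between two score vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical rubric scores for six clinical-reasoning responses.
human_scores = [4, 3, 5, 2, 4, 3]
model_a_run1 = [4, 3, 4, 2, 5, 3]   # e.g., larger model, one run
model_a_run2 = [4, 2, 4, 2, 5, 3]   # same model and prompt, repeated run
model_b_run1 = [3, 3, 5, 1, 4, 2]   # e.g., smaller model, same prompt

print("intra-model (run-to-run):", pearson(model_a_run1, model_a_run2))
print("inter-model (A vs. B):   ", pearson(model_a_run1, model_b_run1))
print("LLM-human (A vs. raters):", pearson(model_a_run1, human_scores))
```

In practice, chance-corrected agreement measures (e.g., weighted kappa or intraclass correlation) are more common for rater-agreement studies than raw correlation; the choice here is purely for brevity.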