Which Words Matter Most in Zero-Shot Prompts?

Nikta Gohari Sadr, Sangmitra Madhusudan, Hassan Sajjad, Ali Emami

Published: 2025/2/5

Abstract

While zero-shot instructional prompts like "Let's think step-by-step" have revolutionized Large Language Model performance, a fundamental question remains unanswered: which specific words drive their remarkable effectiveness? We introduce the ZIP score (Zero-shot Importance of Perturbation), the first systematic method to quantify individual word importance in instructional prompts through controlled perturbations including synonym replacement, co-hyponym substitution, and strategic removal. Our analysis across four flagship models, seven widely-adopted prompts, and multiple task domains reveals four key findings: (1) Task-specific word hierarchies exist where mathematical problems prioritize "step-by-step" while reasoning tasks favor "think"; (2) Proprietary models show superior alignment with human intuitions compared to open-source alternatives; (3) Nouns dominate importance rankings, consistently representing the majority of significant words; and (4) Word importance inversely correlates with model performance, indicating prompts have greatest impact where models struggle most. Beyond revealing these patterns, we establish the first ground-truth benchmark for prompt interpretability through 20 validation prompts with predetermined key words, where ZIP achieves 90% accuracy versus LIME's 60%. Our findings advance prompt science, the study of how language shapes model behavior, providing both practical insights for prompt engineering and theoretical understanding of word-level effects in LLMs.
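The perturbation idea behind the ZIP score can be illustrated with a minimal sketch: perturb each word of a prompt (here, only removal; the paper also uses synonym and co-hyponym substitution) and score importance as the resulting change in task performance. The `score_fn` below is a hypothetical stand-in for querying an LLM and measuring accuracy; the toy heuristic is an assumption so the sketch runs standalone, not the paper's actual scoring procedure.

```python
# Hedged sketch of perturbation-based word importance (removal only).
# A real implementation would query an LLM; `score_fn` abstracts that away.

def word_importance(prompt, score_fn):
    """Score each word by the drop in score_fn when that word is removed."""
    words = prompt.split()
    base = score_fn(prompt)
    scores = {}
    for i, w in enumerate(words):
        variant = " ".join(words[:i] + words[i + 1:])  # remove word i
        scores[w] = base - score_fn(variant)           # performance drop
    return scores

# Toy stand-in (assumption): accuracy depends on certain keywords being present.
def toy_score(prompt):
    return 0.5 + 0.2 * ("step-by-step" in prompt) + 0.1 * ("think" in prompt)

ranking = word_importance("Let's think step-by-step", toy_score)
# Under this toy scorer, "step-by-step" receives the largest importance.
```

In the real method, averaging drops over several perturbation types (synonyms, co-hyponyms, removal) makes the score less sensitive to any single edit's artifacts.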
