The Impact of Role Design in In-Context Learning for Large Language Models

Hamidreza Rouzegar, Masoud Makrehchi

Published: September 27, 2025

Abstract

In-context learning (ICL) enables Large Language Models (LLMs) to generate predictions based on prompts without additional fine-tuning. While prompt engineering has been widely studied, the impact of role design within prompts remains underexplored. This study examines the influence of role configurations in zero-shot and few-shot learning scenarios using GPT-3.5 and GPT-4o from OpenAI, and Llama2-7b and Llama2-13b from Meta. We evaluate the models' performance across datasets, focusing on tasks such as sentiment analysis, text classification, question answering, and math reasoning. Our findings suggest the potential of role-based prompt structuring to enhance LLM performance.
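To make the idea of role-based prompt structuring concrete, the sketch below contrasts a plain zero-shot prompt with one that assigns the model a persona via a system message, for a sentiment-classification task like those studied in the paper. This is an illustrative sketch only, not the authors' exact prompts or evaluation setup; the persona wording, the example review, and the model name are assumptions, and it relies on the `openai` Python package (v1.x) with an `OPENAI_API_KEY` environment variable.

```python
# Minimal sketch (not the authors' exact setup): contrasting a role-based
# prompt with a plain prompt for zero-shot sentiment classification.
# Assumes the `openai` Python package (>=1.0) and an OPENAI_API_KEY env var;
# the persona wording, review text, and model name are illustrative choices.
from openai import OpenAI

client = OpenAI()

REVIEW = "The battery dies within two hours, but the screen is gorgeous."


def classify(messages):
    """Send a chat request and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",          # any chat-capable model could be used here
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content.strip()


# Plain zero-shot prompt: no role design, just the task instruction.
plain = [
    {"role": "user",
     "content": f"Classify the sentiment of this review as Positive or Negative:\n{REVIEW}"},
]

# Role-based prompt: a system message assigns the model a persona
# before the same task instruction is given.
role_based = [
    {"role": "system",
     "content": "You are an expert sentiment analyst for consumer product reviews."},
    {"role": "user",
     "content": f"Classify the sentiment of this review as Positive or Negative:\n{REVIEW}"},
]

print("Plain prompt:     ", classify(plain))
print("Role-based prompt:", classify(role_based))
```

In a few-shot variant, labeled example reviews would be appended as additional user/assistant message pairs before the final query, with the role configuration otherwise unchanged.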
