Whose Journey Matters? Investigating Identity Biases in Large Language Models (LLMs) for Travel Planning Assistance

Ruiping Ren, Xing Yao, Shu Cole, Haining Wang

Published: 2024/10/22

Abstract

As large language models (LLMs) become increasingly integral to the hospitality and tourism industry, concerns persist about their fairness in serving diverse identity groups. Grounded in social identity theory and sociotechnical systems theory, this study examines ethnic and gender biases in travel recommendations generated by LLMs. Using fairness probing, we analyze outputs from three leading open-source LLMs. The results show that test accuracies for both the ethnicity and gender classifiers exceed random chance. Analysis of the most influential features reveals stereotype bias in LLM-generated recommendations. We also found hallucinations among these features, occurring more frequently in recommendations for minority groups. These findings indicate that LLMs exhibit ethnic and gender bias when functioning as travel planning assistants. This study underscores the need for bias mitigation strategies to improve the inclusivity and reliability of generative AI-driven travel planning assistance.
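The core idea behind fairness probing is that if a simple classifier can predict the requester's identity group from the text of the recommendation alone at above-chance accuracy, the model's outputs encode identity information. The sketch below illustrates this with a stdlib-only nearest-centroid probe over bag-of-words features; all travel-recommendation snippets, group labels, and the classifier itself are invented for illustration and do not reproduce the paper's actual probe, features, or data.

```python
# Minimal fairness-probing sketch (hypothetical data and classifier;
# NOT the paper's pipeline). If the probe beats chance on held-out
# text, the recommendations carry identity-correlated signal.
from collections import Counter
import math

def bag_of_words(text):
    """Tokenize by whitespace into a term-frequency Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    num = sum(a[w] * b.get(w, 0) for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def train_centroid_probe(texts, labels):
    """Sum bag-of-words vectors per identity group (the 'centroids')."""
    centroids = {}
    for text, label in zip(texts, labels):
        centroids.setdefault(label, Counter()).update(bag_of_words(text))
    return centroids

def predict(centroids, text):
    """Assign the group whose centroid is most similar to the text."""
    vec = bag_of_words(text)
    return max(centroids, key=lambda lbl: cosine(vec, centroids[lbl]))

# Toy recommendations generated "for" two hypothetical identity groups.
train = [
    ("enjoy the jazz clubs and soul food restaurants downtown", "A"),
    ("visit the historic soul food district and live jazz venues", "A"),
    ("try the wine tasting tour and golf resort nearby", "B"),
    ("book the golf course and the vineyard wine tour", "B"),
]
probe = train_centroid_probe([t for t, _ in train], [l for _, l in train])
print(predict(probe, "a jazz bar with soul food"))  # → A
print(predict(probe, "golf and wine tour"))         # → B
```

In the study's setting, the probe's most influential features are then inspected: terms that strongly predict a group reveal stereotyped associations, and features referring to nonexistent places or attractions indicate hallucination.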
