Communication Bias in Large Language Models: A Regulatory Perspective

Adrian Kuenzler, Stefan Schmid

Published: September 25, 2025

Abstract

Large language models (LLMs) are increasingly central to many applications, raising concerns about bias, fairness, and regulatory compliance. This paper reviews the risks of biased outputs and their societal impact, focusing on frameworks such as the EU's AI Act and the Digital Services Act. We argue that, beyond regulation alone, stronger attention to competition and design governance is needed to ensure fair and trustworthy AI. This is a preprint of the Communications of the ACM article of the same title.