Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents

Hilda Hadan, Reza Hadi Mogavi, Leah Zhang-Kennedy, Lennart E. Nacke

Published: 2025/3/28

Abstract

The rapid growth of artificial intelligence (AI) technologies has raised major privacy and ethical concerns. However, existing AI incident taxonomies and guidelines lack grounding in real-world cases, which limits their effectiveness for prevention and mitigation. We analyzed 202 real-world AI privacy and ethical incidents to develop a taxonomy that classifies them across AI lifecycle stages and captures contributing factors, including causes, responsible entities, sources of disclosure, and impacts. Our findings reveal widespread harms stemming from poor organizational decisions and legal non-compliance, few corrective interventions, and rare reporting by AI developers and adopting entities. The taxonomy offers a structured approach for systematic incident reporting and highlights weaknesses in current AI governance frameworks. These findings provide actionable guidance for policymakers and practitioners to strengthen user protections, develop targeted AI policies, enhance reporting practices, and foster responsible AI governance and innovation, especially in contexts such as social media and child protection.