Analysis of Bias in Deep Learning Facial Beauty Regressors

Chandon Hamel, Mike Busch

Published: 2025/9/29

Abstract

Bias can be introduced to AI systems even from seemingly balanced sources, and AI facial beauty prediction is subject to ethnicity-based bias. This work sounds a warning about AI's role in shaping aesthetic norms while providing potential pathways toward equitable beauty technologies through a comparative analysis of models trained on the SCUT-FBP5500 and MEBeauty datasets. Employing rigorous statistical validation (Kruskal-Wallis H-tests, post hoc Dunn analyses), it is demonstrated that both models exhibit significant prediction disparities across ethnic groups $(p < 0.001)$, even when evaluated on the balanced FairFace dataset. Cross-dataset validation, assessed via prediction and error parity, shows algorithmic amplification of societal beauty biases rather than mitigation. The findings underscore the inadequacy of current AI beauty prediction approaches, with only 4.8-9.5% of inter-group comparisons satisfying distributional parity criteria. Mitigation strategies are proposed and discussed in detail.
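
The abstract's validation procedure (a Kruskal-Wallis H-test across ethnic groups followed by post hoc Dunn comparisons) can be illustrated with a minimal sketch. This is not the authors' code: the DataFrame columns `ethnicity` and `pred_score`, and the use of `scipy` and `scikit-posthocs`, are illustrative assumptions about how such an analysis is commonly run.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# model predictions with per-sample ethnicity labels in a pandas DataFrame.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

# Hypothetical columns: 'ethnicity' (group label) and 'pred_score' (model output).
df = pd.read_csv("predictions_on_fairface.csv")  # assumed file

# Kruskal-Wallis H-test: do prediction distributions differ across groups?
groups = [g["pred_score"].values for _, g in df.groupby("ethnicity")]
h_stat, p_value = kruskal(*groups)
print(f"H = {h_stat:.3f}, p = {p_value:.3g}")

# If significant, post hoc Dunn test identifies which group pairs differ;
# Bonferroni correction guards against inflated pairwise error rates.
if p_value < 0.001:
    pairwise_p = sp.posthoc_dunn(
        df, val_col="pred_score", group_col="ethnicity", p_adjust="bonferroni"
    )
    print(pairwise_p)
```

The fraction of pairwise comparisons whose adjusted p-values exceed a chosen threshold would correspond to the share of inter-group comparisons satisfying a distributional parity criterion, analogous to the 4.8-9.5% figure reported in the abstract.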
