humancompatible.detect: a Python Toolkit for Detecting Bias in AI Models

German M. Matilla, Jiri Nemecek, Illia Kryvoviaz, Jakub Marecek

Published: 2025-09-29

Abstract

There is a strong recent emphasis on trustworthy AI. In particular, international regulations, such as the AI Act, demand that AI practitioners measure the quality of the data entering high-risk AI systems and estimate the bias of their outputs. However, many challenges remain, including the poor scalability of maximum mean discrepancy (MMD) and the computational intractability of the Wasserstein-1 distance, two traditional methods for estimating distances on measure spaces. Here, we present humancompatible.detect, a toolkit for bias detection that addresses these challenges. It incorporates two newly developed methods to detect and evaluate bias: maximum subgroup discrepancy (MSD) and subsampled $\ell_\infty$ distances. It has an easy-to-use API documented with multiple examples. humancompatible.detect is licensed under the Apache License, Version 2.0.
