Uncovering Implicit Bias in Large Language Models with a Concept Learning Dataset
Leroy Z. Wang
Published: 2025/9/21
Abstract
We introduce a dataset of concept learning tasks that helps uncover implicit biases in large language models. Using in-context concept learning experiments, we find that language models may have a bias toward upward monotonicity in quantifiers; this bias is less apparent when models are tested by direct prompting without concept learning components. These results demonstrate that in-context concept learning can be an effective way to discover hidden biases in language models.
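To make the setup concrete, the sketch below illustrates what an in-context concept learning probe for quantifier monotonicity might look like. It is a minimal sketch under assumed details: the marble scenes, the hidden concept "at least 3", and the labels are all hypothetical and are not drawn from the released dataset. The idea is that a model biased toward upward monotonicity should extend a learned quantifier concept to larger counts.

```python
# Illustrative sketch (not the paper's actual dataset): a minimal
# in-context concept learning probe for quantifier monotonicity.
# The hidden concept here is the quantifier "at least 3 red marbles";
# few-shot examples label scenes as fitting the rule or not, and the
# query checks whether the model generalizes in the upward-monotone
# direction (4 red marbles should also count as fitting the rule).

def scene(n_red: int, n_total: int) -> str:
    """Describe a simple scene of colored marbles."""
    return f"A box contains {n_total} marbles, {n_red} of them red."

# Few-shot examples consistent with the hypothetical hidden concept
# "at least 3 red marbles".
train = [
    (scene(3, 5), "True"),
    (scene(1, 5), "False"),
    (scene(3, 6), "True"),
    (scene(2, 6), "False"),
]

# Upward-monotone query: 4 >= 3, so a monotone learner answers "True".
query = scene(4, 6)

prompt_lines = ["Each scene either fits the hidden rule or not."]
for description, label in train:
    prompt_lines.append(f"Scene: {description} Fits rule: {label}")
prompt_lines.append(f"Scene: {query} Fits rule:")
prompt = "\n".join(prompt_lines)

print(prompt)  # send this prompt to a language model and inspect its answer
```

In a probe of this shape, comparing the model's completions here against answers to direct questions about the same quantifier (without the few-shot concept learning framing) is what would separate an implicit monotonicity bias from explicitly stated knowledge.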