GRADIEND: Feature Learning within Neural Networks Exemplified through Biases

Jonathan Drechsel, Steffen Herbold

Published: 2025/2/3

Abstract

AI systems frequently exhibit and amplify social biases, leading to harmful consequences in critical areas. This study introduces a novel encoder-decoder approach that leverages model gradients to learn a feature neuron encoding societal bias information such as gender, race, and religion. We show that our method not only identifies which weights of a model must be changed to modify a feature, but also that it can be used to rewrite models to debias them while preserving their other capabilities. We demonstrate the effectiveness of our approach across various model architectures and highlight its potential for broader applications.
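As a rough illustration of the core idea (this is not the authors' implementation, and all names, dimensions, and training details below are hypothetical), one can train a rank-1 linear encoder-decoder on gradient vectors so that a single latent "feature neuron" captures a binary attribute. Here the gradients are synthetic: a shared bias direction scaled by +/-1 plus noise, standing in for gradients from counterfactual (e.g., male/female) inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 200  # dimensionality of flattened gradients, number of samples

# Synthetic "model gradients": a shared bias direction scaled by +/-1
# (a stand-in for gradients from counterfactual input pairs) plus noise.
bias_dir = rng.normal(size=d)
bias_dir /= np.linalg.norm(bias_dir)
labels = rng.choice([-1.0, 1.0], size=n)
grads = labels[:, None] * bias_dir[None, :] + 0.1 * rng.normal(size=(n, d))

# Encoder w_enc maps a gradient vector to one scalar feature neuron;
# decoder w_dec reconstructs the gradient from that neuron alone.
w_enc = 0.1 * rng.normal(size=d)
w_dec = 0.1 * rng.normal(size=d)
lr = 0.1

for _ in range(2000):
    h = grads @ w_enc            # feature neuron activations, shape (n,)
    recon = np.outer(h, w_dec)   # reconstructed gradients, shape (n, d)
    err = recon - grads
    # Gradient descent on the mean squared reconstruction loss.
    g_dec = 2.0 / n * err.T @ h
    g_enc = 2.0 / n * grads.T @ (err @ w_dec)
    w_dec -= lr * g_dec
    w_enc -= lr * g_enc

# After training, the single feature neuron should separate the two groups.
h = grads @ w_enc
corr = abs(np.corrcoef(h, labels)[0, 1])
print(f"|corr(feature neuron, attribute)| = {corr:.3f}")
```

In this toy setting the learned decoder direction also indicates *where* the attribute lives in weight space, which loosely mirrors the paper's claim that the approach identifies which weights to change in order to modify (or remove) a feature.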