Feedback promotes efficient-coding while attenuating bias in recurrent neural networks
Holly Kular, Robert Kim, John Serences, Nuttida Rungratsameetaweemana
Published: 2025/9/27
Abstract
Studies of human decision-making demonstrate that environmental regularities, such as natural image statistics or intentionally nonuniform stimulus probabilities, can be exploited to improve efficiency (termed "efficient-coding"). Conversely, from a machine learning perspective, such nonuniform stimulus statistics can produce biased neural networks with poor generalization performance. Understanding how the brain flexibly leverages stimulus bias while maintaining robust generalization could inform novel architectures that adaptively exploit environmental structure without sacrificing performance on out-of-distribution data. To address this disconnect, we investigated the impact of stimulus regularities in a 3-layer hierarchical continuous-time recurrent neural network (ctRNN), asking how artificial networks might exploit such regularities to improve efficiency while avoiding undesirable biases. We trained the model to reproduce one of six possible inputs under biased conditions (stimulus 1 more probable than stimuli 2-6) or unbiased conditions (all stimuli equally likely). Across all hidden layers, more information was encoded about high-probability stimuli, consistent with the efficient-coding framework. Importantly, reducing feedback from the final hidden layer of trained models selectively magnified representations of high-probability stimuli, at the expense of low-probability stimuli, across all layers. Together, these results suggest that models exploit nonuniform input statistics to improve efficiency, and that feedback pathways evolve to protect the processing of low-probability stimuli by regulating the impact of biased input statistics.
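The abstract does not give implementation details, but the described setup can be sketched concretely. The Python/NumPy code below is a minimal illustration, not the authors' implementation: the exact stimulus probabilities (the paper states only that stimulus 1 is more probable in the biased condition), the layer width, time constant, Euler discretization, and the single layer-3-to-layer-1 feedback path (with a hypothetical `fb_gain` parameter standing in for the feedback-reduction manipulation) are all assumptions, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Biased vs. unbiased stimulus statistics. The specific probabilities are
# assumptions for illustration; the paper only says stimulus 1 is more
# probable than stimuli 2-6 in the biased condition.
n_stimuli = 6
p_biased = np.array([0.5] + [0.1] * 5)
p_unbiased = np.full(n_stimuli, 1.0 / n_stimuli)

def sample_stimulus(biased: bool) -> int:
    """Draw a stimulus index under the biased or unbiased condition."""
    return rng.choice(n_stimuli, p=p_biased if biased else p_unbiased)

def ctrnn_step(x, input_current, W_rec, tau=10.0, dt=1.0):
    """Euler step of tau * dx/dt = -x + W_rec @ tanh(x) + input_current."""
    r = np.tanh(x)
    return x + dt * (-x + W_rec @ r + input_current) / tau

# Random weights for a 3-layer hierarchy with one top-down feedback path
# (final hidden layer back to the first); the width n is arbitrary.
n = 64
W_in = rng.normal(0, 1 / np.sqrt(n_stimuli), (n, n_stimuli))
W_ff = [rng.normal(0, 1 / np.sqrt(n), (n, n)) for _ in range(2)]  # 1->2, 2->3
W_rec = [rng.normal(0, 1 / np.sqrt(n), (n, n)) for _ in range(3)]
W_fb = rng.normal(0, 1 / np.sqrt(n), (n, n))                      # 3->1

def forward(stim_idx, fb_gain=1.0, T=50):
    """Run one held-fixed one-hot stimulus; fb_gain < 1 models reduced feedback."""
    u = np.eye(n_stimuli)[stim_idx]
    x = [np.zeros(n) for _ in range(3)]
    for _ in range(T):
        r = [np.tanh(xi) for xi in x]
        x[0] = ctrnn_step(x[0], W_in @ u + fb_gain * (W_fb @ r[2]), W_rec[0])
        x[1] = ctrnn_step(x[1], W_ff[0] @ r[0], W_rec[1])
        x[2] = ctrnn_step(x[2], W_ff[1] @ r[1], W_rec[2])
    return [np.tanh(xi) for xi in x]  # final firing rates per hidden layer

rates = forward(sample_stimulus(biased=True), fb_gain=0.5)
```

Under this sketch, the biased/unbiased comparison amounts to swapping the sampling distribution, and the feedback-reduction analysis corresponds to rerunning `forward` with `fb_gain < 1` on a trained network and comparing stimulus decodability across the three layers.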