Deep Learning without Weight Symmetry
Li Ji-An, Marcus K. Benna
Published: 2024/5/31
Abstract
Backpropagation, a foundational algorithm for training artificial neural networks, predominates in contemporary deep learning. Although highly successful, it is widely considered biologically implausible because it relies on precise symmetry between feedforward and feedback weights to accurately propagate the gradient signals that assign credit. The so-called weight transport problem concerns how biological brains can learn to align feedforward and feedback paths while avoiding the non-biological transport of feedforward weights into feedback weights. To address this, several credit assignment algorithms, such as feedback alignment and the Kolen-Pollack rule, have been proposed. While they can achieve the desired weight alignment, these algorithms imply that if a neuron sends a feedforward synapse to another neuron, it should also receive an identical, or at least partially correlated, feedback synapse from that neuron, thereby forming a bidirectional connection. However, this idealized connectivity pattern contradicts experimental observations in the brain, a discrepancy we refer to as the weight symmetry problem. To address the challenge posed by these biological constraints on connectivity, we introduce the Product Feedback Alignment (PFA) algorithm. We demonstrate that PFA can eliminate explicit weight symmetry entirely while closely approximating backpropagation and achieving comparable performance in deep convolutional networks. Our results offer a novel approach to solving the longstanding problem of credit assignment in the brain, leading to more biologically plausible learning in deep networks compared to previous methods.
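To make the contrast concrete, below is a minimal sketch, not the paper's implementation, of the weight transport problem on a toy two-layer linear network in NumPy. Backpropagation propagates the output error through the transposed feedforward weights (W2.T), which is the transport step considered biologically implausible; feedback alignment instead uses a fixed random feedback matrix B that is never copied from the feedforward weights. The network sizes, learning rate, and variable names are illustrative assumptions, and the PFA algorithm itself (which the abstract introduces to remove the resulting bidirectional connectivity) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network: x -> h = W1 @ x -> y = W2 @ h.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(size=(n_hid, n_in)) * 0.1
W2 = rng.normal(size=(n_out, n_hid)) * 0.1

# Fixed random feedback matrix used by feedback alignment in place of W2.T.
# B is never updated and never copied from W2, so there is no weight transport.
B = rng.normal(size=(n_hid, n_out)) * 0.1

def backward_error(e_out, use_backprop):
    """Propagate the output error to the hidden layer.

    Backprop transports the transposed feedforward weights (W2.T);
    feedback alignment substitutes the fixed random matrix B.
    """
    return (W2.T if use_backprop else B) @ e_out

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)

lr = 0.01
for step in range(1000):
    h = W1 @ x
    y = W2 @ h
    e = y - target                                # output error
    e_h = backward_error(e, use_backprop=False)   # hidden error via fixed B
    W2 -= lr * np.outer(e, h)                     # local delta-rule updates
    W1 -= lr * np.outer(e_h, x)

y = W2 @ (W1 @ x)
print("final loss:", 0.5 * float((y - target) @ (y - target)))
```

Under the Kolen-Pollack rule, B would additionally receive the same weight increments as W2.T together with weight decay, driving the two matrices toward exact symmetry; this yields precisely the bidirectional, correlated connectivity that the abstract argues conflicts with experimental observations in the brain.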