From MNIST to ImageNet: Understanding the Scalability Boundaries of Differentiable Logic Gate Networks

Sven Brändle, Till Aczel, Andreas Plesner, Roger Wattenhofer

Published: September 30, 2025

Abstract

Differentiable Logic Gate Networks (DLGNs) are a fast and energy-efficient alternative to conventional feed-forward networks. By learning combinations of logic gates, DLGNs enable hardware-friendly execution and thus fast inference. Because DLGNs have only recently gained attention, the architecture is still in its infancy, particularly with respect to the design and scalability of its output layer. To date, it has primarily been tested on datasets with up to ten classes. This work examines the behavior of DLGNs on large multi-class datasets. We investigate their expressiveness and scalability and evaluate alternative output strategies. Using both synthetic and real-world datasets, we provide key insights into the importance of temperature tuning and its impact on output-layer performance. We identify the conditions under which the Group-Sum layer performs well and show how it can be applied to large-scale classification with up to 2,000 classes.
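The abstract refers to two mechanisms that are easier to see in code: logic neurons that learn a softmax mixture over the 16 two-input Boolean gates (relaxed to real-valued probabilistic logic), and a Group-Sum output layer that sums neuron activations per class and scales the result by a temperature. The sketch below is a minimal, illustrative PyTorch rendering of these ideas based on the published DLGN formulation; the class names, random wiring scheme, and default temperature are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def real_valued_gates(a, b):
    """Probabilistic relaxations of all 16 two-input Boolean functions."""
    return torch.stack([
        torch.zeros_like(a),      # FALSE
        a * b,                    # AND
        a - a * b,                # A AND NOT B
        a,                        # A
        b - a * b,                # NOT A AND B
        b,                        # B
        a + b - 2 * a * b,        # XOR
        a + b - a * b,            # OR
        1 - (a + b - a * b),      # NOR
        1 - (a + b - 2 * a * b),  # XNOR
        1 - b,                    # NOT B
        1 - b + a * b,            # A OR NOT B
        1 - a,                    # NOT A
        1 - a + a * b,            # NOT A OR B
        1 - a * b,                # NAND
        torch.ones_like(a),       # TRUE
    ], dim=-1)

class LogicGateLayer(nn.Module):
    """A layer of two-input logic neurons with fixed random wiring.
    Each neuron learns a softmax distribution over the 16 gate types;
    at inference the argmax gate can be hard-wired in hardware."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.register_buffer("idx_a", torch.randint(0, in_dim, (out_dim,)))
        self.register_buffer("idx_b", torch.randint(0, in_dim, (out_dim,)))
        self.logits = nn.Parameter(torch.randn(out_dim, 16))

    def forward(self, x):
        a, b = x[:, self.idx_a], x[:, self.idx_b]
        gates = real_valued_gates(a, b)           # (batch, out_dim, 16)
        weights = F.softmax(self.logits, dim=-1)  # (out_dim, 16)
        return (gates * weights).sum(dim=-1)

class GroupSum(nn.Module):
    """Group-Sum output: partition the final neurons into one group per
    class, sum each group's activations, and divide by a temperature tau.
    tau controls the sharpness of the resulting class scores."""
    def __init__(self, num_classes, tau=1.0):
        super().__init__()
        self.num_classes = num_classes
        self.tau = tau

    def forward(self, x):
        # x: (batch, num_classes * group_size)
        return x.view(x.shape[0], self.num_classes, -1).sum(dim=-1) / self.tau
```

Under this reading, scaling to many classes stresses the Group-Sum layer directly: with a fixed neuron budget, 2,000 classes leaves few neurons per group, and the temperature tau must be tuned so the summed counts produce usable logits, which is the trade-off the paper investigates.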
