IBiT: Utilizing Inductive Biases to Create a More Data Efficient Attention Mechanism

Adithya Giri

Published: 2025/9/24

Abstract

In recent years, Transformer-based architectures have become the dominant method for Computer Vision applications. While Transformers are explainable and scale well with dataset size, they lack the inductive biases of Convolutional Neural Networks. Although these biases can be learned from large datasets, we show that introducing them through learned masks allows Vision Transformers to learn from much smaller datasets without Knowledge Distillation. These Transformers, which we call Inductively Biased Image Transformers (IBiT), are significantly more accurate on small datasets while retaining the explainability of Transformers.
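The abstract does not specify the exact form of the learned masks, but a minimal sketch of one plausible reading is given below: a learnable additive bias over patch-pair attention scores, which training can shape into a convolution-like locality pattern. The class name `LocallyBiasedAttention` and all parameter choices here are illustrative assumptions, not the paper's definitive implementation.

```python
import torch
import torch.nn as nn


class LocallyBiasedAttention(nn.Module):
    """Hypothetical sketch: multi-head self-attention over image patches with
    a learned additive mask, one candidate way to inject a CNN-like
    locality bias into a Vision Transformer."""

    def __init__(self, dim: int, num_heads: int, num_patches: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One learnable bias per head and patch pair (assumption: the paper's
        # "learned masks" act additively on pre-softmax attention scores).
        self.mask = nn.Parameter(torch.zeros(num_heads, num_patches, num_patches))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim**0.5
        attn = attn + self.mask  # inject the learned per-head bias
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


# Usage: 196 patches (14x14 grid), 384-dim embeddings, 6 heads.
layer = LocallyBiasedAttention(dim=384, num_heads=6, num_patches=196)
y = layer(torch.randn(2, 196, 384))  # -> (2, 196, 384)
```

Because the bias is additive before the softmax, a mask that assigns large negative values to distant patch pairs suppresses long-range attention early in training, mimicking a convolution's receptive field while remaining fully learnable.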
