Explainable Deep Learning for Cataract Detection in Retinal Images: A Dual-Eye and Knowledge Distillation Approach
MohammadReza Abbaszadeh Bavil Soflaei, Karim SamadZamini
Published: 2025/9/20
Abstract
Cataract remains a leading cause of visual impairment worldwide, and early detection from retinal imaging is critical for timely intervention. We present a deep learning pipeline for cataract classification using the Ocular Disease Recognition dataset, which contains left- and right-eye fundus photographs from 5,000 patients. We evaluated CNNs, transformers, lightweight architectures, and knowledge-distilled models. The top-performing model, a Swin-Base Transformer, achieved 98.58% accuracy and an F1-score of 0.9836. A distilled MobileNetV3, trained with knowledge transferred from the Swin-Base teacher, reached 98.42% accuracy and a 0.9787 F1-score at greatly reduced computational cost. The proposed dual-eye Siamese variant of the distilled MobileNetV3, which integrates information from both eyes, achieved 98.21% accuracy. Explainability analysis using Grad-CAM demonstrated that the CNN-based models concentrated on medically significant features, such as lens opacity and central blur. These results show that accurate, interpretable cataract detection is achievable even with lightweight models, supporting potential clinical integration in resource-limited settings.
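The distillation step described above pairs a lightweight MobileNetV3 student with the Swin-Base teacher's soft predictions. The following is a minimal sketch of a response-based knowledge distillation training step, assuming a PyTorch setup; the temperature, loss weighting, MobileNetV3-Small variant, and two-class head are illustrative assumptions, not implementation details reported by the authors.

```python
# Hedged sketch of teacher-student knowledge distillation (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v3_small

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL divergence and hard-label cross-entropy.
    T (temperature) and alpha (soft/hard weighting) are illustrative values."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Student: lightweight MobileNetV3 with a 2-class head (cataract / normal).
student = mobilenet_v3_small(weights=None)
student.classifier[-1] = nn.Linear(student.classifier[-1].in_features, 2)

def train_step(teacher, images, labels, optimizer):
    """One hypothetical training step; `teacher` is any frozen model
    (e.g. a Swin-Base classifier) mapping fundus images to 2-class logits."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(images)       # soft targets from the teacher
    s_logits = student(images)           # student predictions
    loss = distillation_loss(s_logits, t_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```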