KDC-Diff: A Latent-Aware Diffusion Model with Knowledge Retention for Memory-Efficient Image Generation

Md. Naimur Asif Borno, Md Sakib Hossain Shovon, Asmaa Soliman Al-Moisheer, Mohammad Ali Moni

Published: 2025/5/11

Abstract

The growing adoption of generative AI in real-world applications has exposed a critical bottleneck: the computational demands of diffusion-based text-to-image models. In this work, we propose KDC-Diff, a novel and scalable generative framework designed to significantly reduce computational overhead while maintaining high performance. At its core, KDC-Diff couples a structurally streamlined U-Net with a dual-layered knowledge distillation strategy that transfers semantic and structural representations from a larger teacher model. Moreover, a latent-space replay-based continual learning mechanism is incorporated to ensure stable generative performance across sequential tasks. Evaluated on benchmark datasets, our model demonstrates strong performance on FID, CLIP, KID, and LPIPS metrics while achieving substantial reductions in parameter count, inference time, and FLOPs. KDC-Diff offers a practical, lightweight, and generalizable solution for deploying diffusion models in low-resource environments, making it well-suited for the next generation of intelligent and resource-aware computing systems.
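To make the two core ideas in the abstract concrete, the sketch below shows one plausible way to combine a dual-layered distillation objective (an output-level term on predicted noise plus a feature-level term on intermediate activations) with a bounded latent replay buffer for continual learning. All names, loss forms, and weights here are illustrative assumptions and not the paper's exact formulation.

```python
# Hypothetical sketch only: dual-layered distillation loss + latent replay buffer.
# The MSE losses, weights, and buffer policy are assumptions for illustration.
import random
import torch
import torch.nn.functional as F


def distillation_loss(student_eps, teacher_eps, student_feat, teacher_feat,
                      w_out=1.0, w_feat=0.5):
    """Output-level term matches the student's predicted noise to the teacher's;
    feature-level term matches intermediate U-Net activations (both assumed MSE)."""
    out_term = F.mse_loss(student_eps, teacher_eps)
    feat_term = F.mse_loss(student_feat, teacher_feat)
    return w_out * out_term + w_feat * feat_term


class LatentReplayBuffer:
    """Keeps a bounded set of latents from earlier tasks; a replayed mini-batch is
    mixed into training on the current task to limit catastrophic forgetting."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.latents = []

    def add(self, z):
        for sample in z.detach().cpu():
            if len(self.latents) >= self.capacity:
                # Reservoir-style eviction: drop a random stored latent.
                self.latents.pop(random.randrange(len(self.latents)))
            self.latents.append(sample)

    def sample(self, batch_size):
        if not self.latents:
            return None
        picks = random.sample(self.latents, min(batch_size, len(self.latents)))
        return torch.stack(picks)


if __name__ == "__main__":
    # Toy tensors standing in for student/teacher noise predictions and features.
    s_eps, t_eps = torch.randn(4, 4, 32, 32), torch.randn(4, 4, 32, 32)
    s_feat, t_feat = torch.randn(4, 256), torch.randn(4, 256)
    print("distillation loss:", distillation_loss(s_eps, t_eps, s_feat, t_feat).item())

    buffer = LatentReplayBuffer(capacity=8)
    buffer.add(torch.randn(4, 4, 32, 32))
    replayed = buffer.sample(2)
    print("replayed latent batch:", tuple(replayed.shape))
```

In this reading, the distillation loss would be computed on current-task latents while the replay buffer contributes latents from previous tasks, so a single training step optimizes both fidelity to the teacher and retention of earlier generative behavior.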
