Training-Free Diffusion Framework for Stylized Image Generation with Identity Preservation
Mohammad Ali Rezaei, Helia Hajikazem, Saeed Khanehgir, Mahdi Javanmardi
Published: 2025/6/7
Abstract
Although diffusion models have demonstrated remarkable generative capabilities, existing style transfer techniques often struggle to maintain identity while achieving high-quality stylization. This limitation is particularly critical in practical applications such as advertising and marketing, where preserving the identity of featured individuals is essential to a campaign's effectiveness. The degradation is especially severe when subjects are far from the camera or appear within a group, frequently leading to a significant loss of identity. To address this issue, we introduce a novel, training-free framework for identity-preserving stylized image synthesis. Key contributions include the "Mosaic Restored Content Image" technique, which significantly enhances identity retention in complex scenes, and a training-free content consistency loss that improves the preservation of fine-grained details by directing more attention to the original image during stylization. Our experiments show that the proposed approach substantially outperforms the baseline model in concurrently maintaining high stylistic fidelity and robust identity integrity, all without requiring model retraining or fine-tuning.
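The abstract does not spell out the form of the content consistency loss; the paper's version reportedly steers cross-attention toward the original image. As a minimal sketch only, the snippet below swaps in a generic training-free guidance scheme: a feature-matching loss between the stylized pass and the original content pass, whose gradient nudges the sampling latent while the denoiser stays frozen. All names (`content_consistency_loss`, `guided_step`, `extract_feats`, `step_size`) are hypothetical placeholders, not the authors' API.

```python
import torch
import torch.nn.functional as F

def content_consistency_loss(stylized_feats, content_feats):
    # Mean-squared distance between matched feature maps, summed over layers.
    return sum(F.mse_loss(s, c) for s, c in zip(stylized_feats, content_feats))

def guided_step(latent, extract_feats, content_feats, step_size=0.1):
    """One training-free guidance step on the current sampling latent.

    `extract_feats` is any differentiable function mapping a latent to a
    list of feature maps (e.g., hooked activations of a frozen denoiser);
    no weights are updated, so no retraining or fine-tuning is involved.
    """
    latent = latent.detach().requires_grad_(True)
    loss = content_consistency_loss(extract_feats(latent), content_feats)
    grad, = torch.autograd.grad(loss, latent)
    # Move the latent toward the original content's features.
    return (latent - step_size * grad).detach()

# Toy usage: random tensors stand in for denoiser features and latents.
extract = lambda z: [z.mean(dim=1, keepdim=True), F.avg_pool2d(z, 2)]
content_feats = [f.detach() for f in extract(torch.randn(1, 4, 64, 64))]
latent = torch.randn(1, 4, 64, 64)
latent = guided_step(latent, extract, content_feats)
```

In an actual sampler this step would run once per denoising iteration, with the reference features precomputed from the original content image; the sketch only illustrates why the scheme is training-free, since gradients flow to the latent rather than to model weights.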