Image Inpainting based on Visual-Neural-Inspired Specific Object-of-Interest Imaging Technology
Yonghao Wu, Chang Liu, Vladimir Filaretov, Dmitry Yukhimets
Published: 2025/8/18
Abstract
Conventional image inpainting methods operate on whole images. Drawing on cortical processing principles, this study addresses two bottlenecks of such holistic approaches, namely susceptibility to informational redundancy and low computational efficiency under occlusion and complex backgrounds, by proposing a novel framework: "Specific Object-of-Interest Imaging". Its first stage (Stage I) extracts and encodes object-level representations from complex scenes, generating semantic and structural priors that can be seamlessly integrated into any inpainting framework. Experimental validation on our Teapot, Elephant, Giraffe, and Zebra object datasets demonstrates that inpainting models equipped with this method outperform their unmodified counterparts across metrics including SSIM, PSNR, MAE, and LPIPS, while maintaining robustness in extreme scenarios (low illumination, high noise, multi-object occlusion, motion blur). Theoretical analysis grounded in cognitive neuroscience reveals deep correlations between the "object precedence perception" mechanism and dynamic feature modulation in the visual cortices V1 through V4. We show that incorporating Stage I significantly enhances object consistency and semantic coherence across a range of mainstream inpainting models. This work not only achieves efficient and precise target-centric imaging but also opens an interdisciplinary pathway bridging brain-inspired computational frameworks with advanced image inpainting techniques.
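For reference, two of the reported image-quality metrics (PSNR and MAE) have simple closed forms; the sketch below, a minimal NumPy implementation not taken from the paper, illustrates how they compare a reference image against an inpainted result. SSIM and LPIPS are more involved and are typically computed with `skimage.metrics.structural_similarity` and the `lpips` package, respectively.

```python
import numpy as np

def psnr(ref, est, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

def mae(ref, est):
    """Mean absolute error; lower means closer to the reference."""
    return float(np.mean(np.abs(ref.astype(np.float64) - est.astype(np.float64))))

# Toy comparison: a reference image vs. a lightly perturbed "inpainted" version.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
est = np.clip(ref.astype(np.int16) + rng.integers(-5, 6, size=ref.shape),
              0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, est):.2f} dB, MAE = {mae(ref, est):.2f}")
```

A perfect reconstruction yields infinite PSNR and zero MAE, so improvements reported over a baseline correspond to higher PSNR/SSIM and lower MAE/LPIPS.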