Interactive Semantic Segmentation for Phosphene Vision Neuroprosthetics
Eleftherios Papadopoulos, Yagmur Güçlütürk
Published: 2025/9/24
Abstract
Visual impairments present significant challenges to individuals worldwide, impacting daily activities and quality of life. Visual neuroprosthetics offer a promising solution, leveraging technological advancements to provide a simplified visual sense through devices comprising cameras, computers, and implanted electrodes. This study investigates user-centered design principles for a phosphene vision algorithm, utilizing feedback from visually impaired individuals to guide the development of a gaze-controlled semantic segmentation system. We conducted interviews that revealed key design principles, which informed the implementation of a gaze-guided semantic segmentation algorithm using the Segment Anything Model (SAM). In a simulated phosphene vision environment, participants performed object detection tasks under SAM, edge detection, and normal vision conditions. SAM improved identification accuracy over edge detection, remained effective in complex scenes, and was particularly robust for specific object shapes. These findings demonstrate the value of user feedback and the potential of gaze-guided semantic segmentation to enhance neuroprosthetic vision.
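To make the described pipeline concrete, the following is a minimal sketch of how a gaze point can prompt SAM and how the resulting mask can be reduced to a coarse phosphene-like percept. It uses the public segment-anything package and its documented SamPredictor API; the checkpoint filename, the helper names segment_at_gaze and render_phosphenes, the single-point prompting strategy, and the 32x32 dot grid are illustrative assumptions, not the authors' exact implementation.

import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone (checkpoint path is an assumption; any SAM variant works).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def segment_at_gaze(image_rgb: np.ndarray, gaze_xy: tuple[int, int]) -> np.ndarray:
    """Prompt SAM with a single foreground point at the gaze location and
    return the highest-scoring binary mask."""
    predictor.set_image(image_rgb)  # HxWx3 uint8 image in RGB order
    masks, scores, _ = predictor.predict(
        point_coords=np.array([gaze_xy], dtype=np.float32),
        point_labels=np.array([1]),  # 1 marks a foreground point
        multimask_output=True,
    )
    return masks[np.argmax(scores)]

def render_phosphenes(mask: np.ndarray, grid: int = 32) -> np.ndarray:
    """Sample the mask on a coarse grid and draw bright dots, approximating
    a simulated phosphene percept of the segmented object."""
    h, w = mask.shape
    percept = np.zeros((h, w), dtype=np.uint8)
    for y in np.linspace(0, h - 1, grid).astype(int):
        for x in np.linspace(0, w - 1, grid).astype(int):
            if mask[y, x]:
                cv2.circle(percept, (int(x), int(y)), max(1, h // (grid * 4)), 255, -1)
    return percept

# Example usage (gaze coordinates would come from an eye tracker):
# frame = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
# percept = render_phosphenes(segment_at_gaze(frame, (320, 240)))

Rendering only the gaze-selected object, rather than all edges in the scene, is what lets this kind of approach stay effective in cluttered scenes: the limited phosphene budget is spent on a single segmented shape instead of on every contour in view.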