Learning Multidimensional Urban Poverty Representation with Satellite Imagery

Sungwon Park, Sumin Lee, Jihee Kim, Jae-Gil Lee, Meeyoung Cha, Jeasurk Yang, Donghyun Ahn

Published: 2025-09-05

Abstract

Recent advances in deep learning have enabled the inference of urban socioeconomic characteristics from satellite imagery. However, models relying solely on urbanization traits often show weak correlations with poverty indicators, as unplanned urban growth can obscure economic disparities and spatial inequalities. To address this limitation, we introduce a novel representation learning framework that captures multidimensional deprivation-related traits from very high-resolution satellite imagery for precise urban poverty mapping. Our approach integrates three complementary traits: (1) accessibility traits, learned via contrastive learning to encode proximity to essential infrastructure; (2) morphological traits, derived from building footprints to reflect housing conditions in informal settlements; and (3) economic traits, inferred from nightlight intensity as a proxy for economic activity. To mitigate spurious correlations, such as those from non-residential nightlight sources that misrepresent poverty conditions, we incorporate a backdoor adjustment mechanism that leverages morphological traits during training of the economic module. By fusing these complementary features into a unified representation, our framework captures the complex nature of poverty, which often diverges from economic development trends. Evaluations across three capital cities (Cape Town, Dhaka, and Phnom Penh) show that our model significantly outperforms existing baselines, offering a robust tool for poverty mapping and policy support in data-scarce regions.
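For context (this formula is standard causal inference, not quoted from the paper): a backdoor adjustment deconfounds the effect of image features x on an economic outcome y by marginalizing over an observed confounder z, which here would presumably be the morphological trait:

    P(y | do(x)) = Σ_z P(y | x, z) · P(z)

Below is a minimal PyTorch sketch of the final fusion step described in the abstract, assuming each trait branch already produces a fixed-size embedding. All module names, dimensions, and the concatenation-plus-MLP design are illustrative assumptions, not the authors' implementation:

    import torch
    import torch.nn as nn

    class MultiTraitFusion(nn.Module):
        """Illustrative sketch: fuse accessibility, morphological, and
        economic trait embeddings into one unified poverty representation.
        Names and dimensions are hypothetical, not from the paper."""

        def __init__(self, dim: int = 128):
            super().__init__()
            # one projection head per trait branch
            self.proj_access = nn.Linear(dim, dim)
            self.proj_morph = nn.Linear(dim, dim)
            self.proj_econ = nn.Linear(dim, dim)
            # small MLP over the concatenated trait embeddings
            self.fuse = nn.Sequential(
                nn.Linear(3 * dim, dim),
                nn.ReLU(),
                nn.Linear(dim, dim),
            )

        def forward(self, z_access, z_morph, z_econ):
            # concatenate the three complementary trait embeddings
            h = torch.cat(
                [
                    self.proj_access(z_access),
                    self.proj_morph(z_morph),
                    self.proj_econ(z_econ),
                ],
                dim=-1,
            )
            return self.fuse(h)

    # usage with random stand-in embeddings for a batch of 4 image tiles
    model = MultiTraitFusion()
    z = torch.randn(4, 128)
    rep = model(z, z, z)  # shape (4, 128): unified poverty representation

A concatenate-then-project fusion is one common choice; the paper may instead use attention or gating over the branches, and the backdoor adjustment would additionally condition the economic branch on the morphological embedding during training.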
