Knowledge graph-based personalized multimodal recommendation fusion framework

Yu Fang

Published: 2025/9/3

Abstract

In today's era of information abundance, rapid advances in artificial intelligence have made recommendation systems indispensable. Conventional approaches based on collaborative filtering or individual user attributes fall short in capturing nuanced user interests. Knowledge graphs and multimodal data integration offer richer and more precise representations of users and items. This paper reviews existing multimodal knowledge graph recommendation frameworks and identifies shortcomings in modal interaction and higher-order dependency modeling. We propose the Cross-Graph Cross-Modal Mutual Information-Driven Unified Knowledge Graph Learning and Recommendation Framework (CrossGMMI-DUKGLR), which employs pre-trained visual-text alignment models for feature extraction, achieves fine-grained modality fusion through multi-head cross-attention, and propagates higher-order adjacency information via graph attention networks.
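The abstract names two concrete mechanisms: multi-head cross-attention for modality fusion and graph attention for higher-order propagation. Below is a minimal PyTorch sketch of how these two steps could be wired together; it is an illustration under stated assumptions, not the authors' implementation. The class names (CrossModalFusion, GATLayer), the 256-dimensional features, and the random tensors standing in for CLIP-style pre-extracted visual/text features are all hypothetical, since the abstract does not give architectural details.

    # Illustrative sketch only; not the paper's code. Random tensors stand in
    # for features assumed to come from a pre-trained visual-text alignment
    # model (e.g., a CLIP-like encoder).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossModalFusion(nn.Module):
        """Fine-grained fusion: text tokens attend to visual patches."""
        def __init__(self, dim: int = 256, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
            # Query = text tokens; key/value = visual patches.
            fused, _ = self.attn(query=text, key=vision, value=vision)
            return self.norm(text + fused)  # residual connection + layer norm

    class GATLayer(nn.Module):
        """Single-head graph attention layer propagating neighbor information."""
        def __init__(self, dim: int = 256):
            super().__init__()
            self.W = nn.Linear(dim, dim, bias=False)
            self.a = nn.Linear(2 * dim, 1, bias=False)

        def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # h: (N, dim) node features; adj: (N, N) 0/1 adjacency with self-loops.
            Wh = self.W(h)
            N = Wh.size(0)
            # All pairwise concatenations [Wh_i || Wh_j] -> attention logits e_ij.
            pairs = torch.cat([Wh.unsqueeze(1).expand(-1, N, -1),
                               Wh.unsqueeze(0).expand(N, -1, -1)], dim=-1)
            e = F.leaky_relu(self.a(pairs).squeeze(-1))
            e = e.masked_fill(adj == 0, float("-inf"))  # attend only to neighbors
            alpha = torch.softmax(e, dim=-1)
            return F.elu(alpha @ Wh)

    # Toy run: 5 items, 8 text tokens / 10 visual patches each, 256-dim features.
    text = torch.randn(5, 8, 256)     # stand-in for text-token features
    vision = torch.randn(5, 10, 256)  # stand-in for visual-patch features
    item_vec = CrossModalFusion()(text, vision).mean(dim=1)  # (5, 256) item embeddings
    adj = torch.eye(5)                # placeholder item graph (self-loops only)
    out = GATLayer()(item_vec, adj)   # one step of higher-order propagation
    print(out.shape)                  # torch.Size([5, 256])

Stacking several such GAT layers is what would propagate higher-order (multi-hop) adjacency information; a single layer is shown here for brevity.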
