Knowledge-Augmented Relation Learning for Complementary Recommendation with Large Language Models

Chihiro Yamasaki, Kai Sugahara, Kazushi Okamoto

Published: September 6, 2025

Abstract

Complementary recommendations play a crucial role in e-commerce by enhancing user experience through suggestions of compatible items. Accurate classification of complementary item relationships requires reliable labels, but their creation presents a dilemma. Behavior-based labels are widely used because they can be easily generated from interaction logs; however, they often contain significant noise and lack reliability. While function-based labels (FBLs) provide high-quality definitions of complementary relationships by carefully articulating them based on item functions, their reliance on costly manual annotation severely limits a model's ability to generalize to diverse items. To resolve this trade-off, we propose Knowledge-Augmented Relation Learning (KARL), a framework that strategically fuses active learning with large language models (LLMs). KARL efficiently expands a high-quality FBL dataset at low cost by selectively sampling the data points that the classifier finds most difficult and delegating their labeling to an LLM. Our experiments showed that in out-of-distribution (OOD) settings, i.e., an unexplored item feature space, KARL improved the baseline accuracy by up to 37%. In contrast, in in-distribution (ID) settings, i.e., the already-learned item feature space, the improvement was less than 0.5%, and prolonged learning could even degrade accuracy. These contrasting results stem from the data diversity driven by KARL's knowledge expansion, suggesting the need for a dynamic sampling strategy that adjusts diversity based on the prediction context (ID or OOD).
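The abstract describes KARL's loop only at a high level. As a rough illustration of the general pattern it names (uncertainty-based active learning with an LLM as the annotator), a minimal sketch might look as follows. The classifier choice (logistic regression), the batch size, and the `llm_label` stub are illustrative assumptions, not the paper's actual components; in a real system `llm_label` would prompt an LLM to judge whether an item pair is functionally complementary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def llm_label(pair_features):
    # Hypothetical stand-in for the LLM annotation step: KARL would prompt
    # an LLM for a function-based complementarity judgment on the item pair.
    # A fixed rule is used here purely so the sketch is runnable.
    return int(pair_features.sum() > 0)

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 8))   # unlabeled item-pair features
X_seed = rng.normal(size=(20, 8))    # small seed set with function-based labels
y_seed = (X_seed.sum(axis=1) > 0).astype(int)

clf = LogisticRegression().fit(X_seed, y_seed)
X_train, y_train = X_seed, y_seed

for _ in range(5):
    # Uncertainty sampling: pick the pairs whose predicted probability is
    # closest to 0.5, i.e., the ones the classifier finds most difficult.
    proba = clf.predict_proba(X_pool)[:, 1]
    hardest = np.argsort(np.abs(proba - 0.5))[:10]

    # Query the (simulated) LLM for labels on those pairs, then retrain.
    new_y = np.array([llm_label(x) for x in X_pool[hardest]])
    X_train = np.vstack([X_train, X_pool[hardest]])
    y_train = np.concatenate([y_train, new_y])
    X_pool = np.delete(X_pool, hardest, axis=0)
    clf = LogisticRegression().fit(X_train, y_train)
```

Under this reading, each round spends the annotation budget only where the current model is least certain, which is how a small, expensive FBL seed set can be expanded cheaply.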