How Does A Text Preprocessing Pipeline Affect Ontology Matching?

Zhangcheng Qiang, Kerry Taylor, Weiqing Wang

Published: 2024/11/6

Abstract

The classical text preprocessing pipeline, comprising Tokenisation, Normalisation, Stop Words Removal, and Stemming/Lemmatisation, has been implemented in many systems for ontology matching (OM). However, the lack of standardisation in text preprocessing creates diversity in the mapping results. In this paper, we investigate the effect of the text preprocessing pipeline on 8 Ontology Alignment Evaluation Initiative (OAEI) tracks with 49 distinct alignments. We find that Tokenisation and Normalisation (categorised as Phase 1 text preprocessing) are more effective than Stop Words Removal and Stemming/Lemmatisation (categorised as Phase 2 text preprocessing). We propose two novel approaches to repair unwanted false mappings that occur in Phase 2 text preprocessing. One is an ad hoc logic-based repair approach that employs an ontology-specific check to find common words that cause false mappings; these words are stored in a reserved word set that is applied before text preprocessing. The other is a post hoc LLM-based repair approach that leverages the strong background knowledge of large language models (LLMs) to repair non-existent and counter-intuitive false mappings after text preprocessing. The experimental results indicate that these two approaches significantly improve matching correctness and overall matching performance.
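To make the two-phase pipeline concrete, here is a minimal, stdlib-only Python sketch of the classical steps the abstract names, including an illustrative reserved word set checked before Phase 2. The reserved words, stop-word list, and naive suffix stemmer are assumptions for illustration only; they are not the paper's actual implementation (the paper's repair uses an ontology-specific check, and real systems typically use a proper stemmer such as Porter's).

```python
import re

# Hypothetical reserved word set (illustrating the ad hoc repair idea):
# ontology-specific words that would cause false mappings if altered,
# so Phase 2 skips them. "series" is an assumed example.
RESERVED = {"series"}

# Minimal illustrative stop-word list (real pipelines use larger sets).
STOP_WORDS = {"a", "an", "the", "of", "in", "has"}


def preprocess(label: str) -> list[str]:
    # Phase 1: tokenisation (split on non-letters) and
    # normalisation (lowercasing).
    tokens = [t.lower() for t in re.findall(r"[A-Za-z]+", label)]

    # Phase 2: stop-word removal and naive suffix stemming,
    # skipped entirely for reserved words.
    out = []
    for t in tokens:
        if t in RESERVED:
            out.append(t)          # protected from Phase 2
            continue
        if t in STOP_WORDS:
            continue               # stop-word removal
        for suffix in ("ing", "es", "s"):   # crude stand-in for a stemmer
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        out.append(t)
    return out


print(preprocess("Has_Part of the Binding Sites"))  # → ['part', 'bind', 'sit']
print(preprocess("Time Series"))                     # → ['time', 'series']
```

Note how the second call shows the repair idea: without the reserved word set, "series" would be stemmed to "seri" (or clipped to "serie"), which could align with an unrelated concept and produce exactly the kind of false mapping the paper's Phase 2 repairs target.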
