LeakageDetector 2.0: Analyzing Data Leakage in Jupyter-Driven Machine Learning Pipelines
Owen Truong, Terrence Zhang, Arnav Marchareddy, Ryan Lee, Jeffery Busold, Michael Socas, Eman Abdullah AlOmar
Published: September 19, 2025
Abstract
In software development environments, code quality is crucial. This study aims to assist Machine Learning (ML) engineers in enhancing their code by identifying and correcting Data Leakage issues within their models. Data Leakage occurs when information from the test dataset is inadvertently included in the training data when preparing a data science model, resulting in misleading performance evaluations. ML developers must carefully separate their data into training, evaluation, and test sets to avoid introducing Data Leakage into their code. In this paper, we develop a new Visual Studio Code (VS Code) extension, called LeakageDetector, that detects Data Leakage, mainly Overlap, Preprocessing and Multi-test leakage, from Jupyter Notebook files. Beyond detection, we included two correction mechanisms: a conventional approach, known as a quick fix, which manually fixes the leakage, and an LLM-driven approach that guides ML developers toward best practices for building ML pipelines.