Detecting Model Drifts in Non-Stationary Environment Using Edit Operation Measures

Chang-Hwan Lee, Alexander Shim

Published: 2025/9/14

Abstract

Reinforcement learning (RL) agents typically assume stationary environment dynamics. Yet in real-world applications such as healthcare, robotics, and finance, transition probabilities or reward functions may evolve, leading to model drift. This paper proposes a novel framework to detect such drifts by analyzing the distributional changes in sequences of agent behavior. Specifically, we introduce a suite of edit operation-based measures to quantify deviations between state-action trajectories generated under stationary and perturbed conditions. Our experiments demonstrate that these measures can effectively distinguish drifted from non-drifted scenarios, even under varying levels of noise, providing a practical tool for drift detection in non-stationary RL environments.
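The abstract does not specify which edit operation measures the paper uses, but the core idea of quantifying deviations between state-action trajectories can be illustrated with the standard Levenshtein edit distance. The sketch below is an assumption for illustration only: trajectories are represented as sequences of (state, action) pairs, and the distance counts the insertions, deletions, and substitutions needed to transform one trajectory into the other.

```python
def edit_distance(a, b):
    """Levenshtein edit distance between two trajectories via dynamic programming.

    Each of a, b is a sequence of (state, action) pairs; insertions,
    deletions, and substitutions all cost 1.
    """
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # delete a[i-1]
                dp[i][j - 1] + 1,        # insert b[j-1]
                dp[i - 1][j - 1] + cost,  # match or substitute
            )
    return dp[m][n]

# Hypothetical trajectories: one from the stationary environment,
# one after a drift that alters the agent's behavior.
stationary = [(0, "a"), (1, "b"), (2, "a"), (3, "b")]
drifted    = [(0, "a"), (1, "a"), (2, "a"), (4, "b")]
print(edit_distance(stationary, drifted))  # -> 2 (two substitutions)
```

In a drift detector along these lines, distances between trajectories sampled before and after a candidate change point would be compared against a baseline distribution of distances under stationary dynamics; a sustained increase signals drift.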
