Learning the Influence Graph of a Markov Process that Randomly Resets to Past

Sudharsan Senthil, Avhishek Chatterjee

Published: 2025/9/19

Abstract

Learning the influence graph G of a high-dimensional Markov process is a challenging problem. Prior work has addressed this task when the process has finite memory. However, the more general regime in which the system probabilistically "jumps back in time", so that the state at time t+1 depends on a sample from a distant past time t-d, remains unexplored. A process with such probabilistic resets can be modeled as a Markov process with memory, but estimation then becomes computationally expensive. To tackle this, we introduce PIMRecGreedy, a modification of the RecGreedy algorithm originally designed for i.i.d. samples. The proposed method does not assume memory, requires no prior knowledge of d, and recovers G with high probability, even without access to the time indices at which the temporal jumps occur and without imposing any constraints on the graph structure.
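To make the process model concrete, below is a minimal simulation sketch of a Markov process that randomly resets to the past, as described in the abstract: with some probability, the transition at step t+1 conditions on the state at time t-d rather than on the current state, and the indices of these jumps are not recorded. The binary state space, the majority-vote local update, and the parameters p_reset and flip are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_reset_process(adj, T, d, p_reset, flip=0.1, seed=0):
    """Simulate a binary Markov process on n nodes whose transition
    occasionally conditions on the state d steps in the past.

    adj[i] lists the parents (influencers) of node i in the graph G.
    With probability p_reset, step t+1 is generated from X_{t-d}
    instead of X_t (a "jump back in time"); the jump times are not
    returned, matching the setting in which they are unobserved.
    """
    rng = np.random.default_rng(seed)
    n = len(adj)
    X = np.zeros((T, n), dtype=int)
    X[0] = rng.integers(0, 2, n)
    for t in range(T - 1):
        # Conditioning state: the present, or d steps back on a reset.
        src = X[max(t - d, 0)] if rng.random() < p_reset else X[t]
        for i in range(n):
            # Illustrative local rule (an assumption, not the paper's):
            # majority vote of the parents, flipped with small probability.
            if adj[i]:
                vote = int(np.mean([src[j] for j in adj[i]]) >= 0.5)
            else:
                vote = src[i]
            X[t + 1, i] = vote ^ (rng.random() < flip)
    return X

# Example: a 3-node chain 0 -> 1 -> 2, unknown to the learner.
samples = simulate_reset_process(adj=[[], [0], [1]], T=1000, d=5, p_reset=0.2)
```

A structure-learning method such as the proposed PIMRecGreedy would then have to recover adj from samples alone, without knowing d, p_reset, or when the resets occurred.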
