Memory Management and Contextual Consistency for Long-Running Low-Code Agents

Jiexi Xu

Published: 2025-09-27

Abstract

The rise of AI-native Low-Code/No-Code (LCNC) platforms enables autonomous agents capable of executing complex, long-duration business processes. However, a fundamental challenge remains: memory management. As agents operate over extended periods, they suffer from "memory inflation" and "contextual degradation," leading to inconsistent behavior, error accumulation, and increased computational cost. This paper proposes a novel hybrid memory system designed specifically for LCNC agents. Inspired by cognitive science, our architecture combines episodic and semantic memory components with a proactive "Intelligent Decay" mechanism, which prunes or consolidates memories based on a composite score that factors in recency, relevance, and user-specified utility. A key innovation is a user-centric visualization interface, aligned with the LCNC paradigm, which allows non-technical users to manage the agent's memory directly, for instance by visually tagging which facts should be retained or forgotten. Through simulated long-running task experiments, we demonstrate that our system significantly outperforms traditional approaches such as sliding-window context and basic retrieval-augmented generation (RAG), yielding superior task completion rates, contextual consistency, and long-term token cost efficiency. Our findings establish a new framework for building reliable, transparent AI agents capable of effective long-term learning and adaptation.
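To make the "Intelligent Decay" idea concrete, the following is a minimal sketch of a composite retention score combining recency, relevance, and user-specified utility, as described in the abstract. The weights, half-life, threshold, and helper names are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of an "Intelligent Decay" retention score.
# All weights, thresholds, and field names are assumptions for exposition.
import math
import time
from dataclasses import dataclass


@dataclass
class MemoryItem:
    content: str
    created_at: float          # Unix timestamp when the memory was stored
    relevance: float           # similarity to the current task context, in [0, 1]
    user_utility: float = 0.5  # user-assigned "keep/forget" weight, in [0, 1]


def decay_score(item: MemoryItem, now: float,
                w_recency: float = 0.4, w_relevance: float = 0.4,
                w_utility: float = 0.2, half_life_s: float = 86_400.0) -> float:
    """Composite retention score: higher means the memory is more worth keeping."""
    age = max(now - item.created_at, 0.0)
    recency = math.exp(-math.log(2) * age / half_life_s)  # exponential recency decay
    return (w_recency * recency
            + w_relevance * item.relevance
            + w_utility * item.user_utility)


def prune(memories: list[MemoryItem], threshold: float = 0.3) -> list[MemoryItem]:
    """Keep memories whose score clears the threshold; the rest would be
    pruned or handed to a consolidation step."""
    now = time.time()
    return [m for m in memories if decay_score(m, now) >= threshold]
```

In this sketch, user-tagged "retain" decisions from the visualization interface would simply raise `user_utility`, letting non-technical users override the automatic decay without touching the scoring logic.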