A Convolution and Attention Based Encoder for Reinforcement Learning under Partial Observability

Wuhao Wang, Zhiyong Chen

Published: 2025/5/29

Abstract

Partially Observable Markov Decision Processes (POMDPs) remain a core challenge in reinforcement learning due to incomplete state information. We address this by reformulating POMDPs as fully observable processes whose augmented states are fixed-length observation histories. To efficiently encode these histories, we propose a lightweight temporal encoder based on depthwise separable convolution and self-attention, avoiding the overhead of recurrent and Transformer-based models. Integrated into an actor-critic framework, our method achieves superior performance on continuous control benchmarks under partial observability. More broadly, this work shows that lightweight temporal encoding can improve the scalability of AI systems under uncertainty. It advances the development of agents capable of reasoning robustly in real-world environments where information is incomplete or delayed.
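The sketch below is not the authors' implementation; it is a minimal illustration of the kind of encoder the abstract describes: a fixed-length observation history passed through a depthwise separable 1-D convolution over time and a self-attention layer, pooled into a single state embedding that an actor-critic agent could consume. All layer sizes, names (e.g., `HistoryEncoder`), and design details are illustrative assumptions.

```python
# Illustrative sketch only; dimensions and layer choices are assumptions.
import torch
import torch.nn as nn


class HistoryEncoder(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int = 64, n_heads: int = 4):
        super().__init__()
        # Project raw observations to the embedding dimension.
        self.proj = nn.Linear(obs_dim, embed_dim)
        # Depthwise separable convolution over the time axis:
        # depthwise conv (groups=embed_dim) followed by a pointwise 1x1 conv.
        self.depthwise = nn.Conv1d(embed_dim, embed_dim, kernel_size=3,
                                   padding=1, groups=embed_dim)
        self.pointwise = nn.Conv1d(embed_dim, embed_dim, kernel_size=1)
        # Single self-attention layer over the convolved history.
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, obs_history: torch.Tensor) -> torch.Tensor:
        # obs_history: (batch, history_len, obs_dim)
        x = self.proj(obs_history)                              # (B, T, D)
        c = self.pointwise(self.depthwise(x.transpose(1, 2)))   # conv over time
        x = self.norm1(x + c.transpose(1, 2))                   # residual + norm
        a, _ = self.attn(x, x, x)                               # self-attention
        x = self.norm2(x + a)
        # Pool over time to get one embedding per history (the augmented state
        # representation an actor-critic head would take as input).
        return x.mean(dim=1)                                    # (B, D)


# Usage: encode a batch of 8 histories, each 16 observations of dimension 17.
enc = HistoryEncoder(obs_dim=17)
z = enc(torch.randn(8, 16, 17))
print(z.shape)  # torch.Size([8, 64])
```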
