The Geometry of Nonlinear Reinforcement Learning

Nikola Milosevic, Nico Scherf

Published: 2025/9/1

Abstract

Reward maximization, safe exploration, and intrinsic motivation are often studied as separate objectives in reinforcement learning (RL). We present a unified geometric framework that views these goals as instances of a single optimization problem on the space of achievable long-term behaviors in an environment. Within this framework, classical methods such as policy mirror descent, natural policy gradient, and trust-region algorithms naturally generalize to nonlinear utilities and convex constraints. We illustrate how this perspective captures robustness, safety, exploration, and diversity objectives, and outline open challenges at the interface of geometry and deep RL.
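To make the abstract's claim concrete, the following is a minimal sketch of the standard convex-RL formulation over occupancy measures; the notation (d_\pi, \mathcal{K}, F, G, D_h, \eta) is our illustration of that setup, not taken verbatim from the paper. Writing d_\pi for the state-action occupancy measure induced by a policy \pi, the achievable set \mathcal{K} = \{ d_\pi : \pi \text{ a policy} \} is a convex polytope in finite MDPs. Standard RL maximizes the linear functional \langle r, d \rangle over \mathcal{K}; the nonlinear setting replaces it with a concave utility F under convex constraints:

\[
\max_{d \in \mathcal{K}} \; F(d) \quad \text{subject to} \quad G(d) \le 0 .
\]

A policy-mirror-descent-style step then linearizes F at the current occupancy d_k and regularizes with a Bregman divergence D_h:

\[
d_{k+1} \in \arg\max_{d \in \mathcal{K}} \; \big\langle \nabla F(d_k), \, d \big\rangle \; - \; \tfrac{1}{\eta} \, D_h(d, d_k) ,
\]

where choosing D_h as a KL divergence recovers the classical policy mirror descent update, and a linear F(d) = \langle r, d \rangle reduces the whole problem to ordinary reward maximization. Safety constraints, entropy-based exploration bonuses, and diversity objectives all fit this template as particular choices of F and G.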
