Towards Efficient Risk-Sensitive Policy Gradient: An Iteration Complexity Analysis

Rui Liu, Anish Gupta, Erfaun Noorani, Pratap Tokekar

Published: 2024/3/13

Abstract

Reinforcement Learning (RL) has shown exceptional performance across various applications, enabling autonomous agents to learn optimal policies through interaction with their environments. However, traditional RL frameworks often face challenges in terms of iteration efficiency and safety. Risk-sensitive policy gradient methods, which incorporate both expected return and risk measures, have been explored for their ability to yield safe policies, yet their iteration complexity remains largely underexplored. In this work, we conduct a rigorous iteration complexity analysis for the risk-sensitive policy gradient method, focusing on the REINFORCE algorithm with an exponential utility function. We establish an iteration complexity of $\mathcal{O}(\epsilon^{-2})$ to reach an $\epsilon$-approximate first-order stationary point (FOSP). Furthermore, we investigate whether risk-sensitive algorithms can achieve better iteration complexity than their risk-neutral counterparts. Our analysis indicates that risk-sensitive REINFORCE can potentially converge faster. To validate our analysis, we empirically evaluate the learning performance and convergence efficiency of the risk-neutral and risk-sensitive REINFORCE algorithms in multiple environments: CartPole, MiniGrid, and Robot Navigation. Empirical results confirm that the risk-sensitive algorithm can converge and stabilize faster than its risk-neutral counterpart. More details can be found on our website https://anonymous.4open.science/w/riskrl.
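
For context, the exponential utility criterion referenced in the abstract is commonly written as follows; this is a minimal sketch using a standard formulation, and the paper's exact notation, discounting, and normalization may differ (the risk parameter $\beta$, trajectory return $R(\tau)$, and discount $\gamma$ below are illustrative):

$$
J_\beta(\theta) \;=\; \frac{1}{\beta}\,\log \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\exp\big(\beta\, R(\tau)\big)\right],
\qquad
R(\tau) \;=\; \sum_{t=0}^{T-1} \gamma^{t} r_t ,
$$
$$
\widehat{\nabla}_\theta J_\beta(\theta) \;\propto\; \exp\big(\beta\, R(\tau)\big)\,\sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t).
$$

Here $\beta < 0$ corresponds to risk-averse behavior, $\beta > 0$ to risk-seeking behavior, and $\beta \to 0$ recovers the risk-neutral expected return optimized by standard REINFORCE.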
