Off-Policy Maximum Entropy RL with Future State and Action Visitation Measures
Adrien Bolland, Gaspard Lambrechts, Damien Ernst
Published: 2024/12/9
Abstract
Maximum entropy reinforcement learning integrates exploration into policy learning by providing additional intrinsic rewards proportional to the entropy of some distribution. In this paper, we propose a novel approach in which the intrinsic reward function is the relative entropy of the discounted distribution of states and actions (or features derived from these states and actions) visited during future time steps. This approach is motivated by three results. First, this new objective is a lower bound on the negated entropy of the marginal visitation distribution of states and actions, commonly used as an alternative exploration objective. Second, a policy maximizing the expected discounted sum of intrinsic rewards also maximizes a lower bound on the state-action value function of the decision process. Third, the distribution used in the intrinsic reward definition is the fixed point of a contraction operator; existing algorithms can therefore be adapted to learn this fixed point off-policy and compute the intrinsic rewards. Finally, we introduce an algorithm that maximizes this new objective and show that the resulting policies have good state-action space coverage and achieve high-performance control.
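
As a rough, notation-only sketch of the objective described above (the symbols $\rho$, $\gamma$, $\lambda$, $p$, $\pi$, and $d^\pi_\gamma$ are introduced here for exposition and may differ from the paper's definitions), one possible form of the maximum entropy objective is
$$
J(\pi) \;=\; \mathbb{E}_\pi\!\left[\sum_{t=0}^{\infty} \gamma^t \Big( \rho(s_t, a_t) \;+\; \lambda\, \mathcal{H}\big(d^\pi_\gamma(\cdot, \cdot \mid s_t, a_t)\big) \Big)\right],
$$
where $\rho$ is the extrinsic reward, $\lambda > 0$ weighs the intrinsic term, and $\mathcal{H}$ denotes the (relative) entropy of the discounted distribution of future states and actions,
$$
d^\pi_\gamma(\bar{s}, \bar{a} \mid s, a) \;=\; (1-\gamma) \sum_{t=1}^{\infty} \gamma^{t-1}\, \Pr\!\big(s_t = \bar{s},\, a_t = \bar{a} \mid s_0 = s,\, a_0 = a;\, \pi\big).
$$
Under this definition, the measure satisfies the recursion
$$
d^\pi_\gamma(\bar{s}, \bar{a} \mid s, a) \;=\; (1-\gamma)\, p(\bar{s} \mid s, a)\, \pi(\bar{a} \mid \bar{s}) \;+\; \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s, a),\, a' \sim \pi(\cdot \mid s')}\!\big[ d^\pi_\gamma(\bar{s}, \bar{a} \mid s', a') \big],
$$
i.e., it is the fixed point of a $\gamma$-contraction, which is what makes it amenable to the off-policy estimation referred to in the third result.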