Safe Learning Under Irreversible Dynamics via Asking for Help

Benjamin Plaut, Juan Liévano-Karim, Hanlin Zhu, Stuart Russell

Published: 2025/2/19

Abstract

Most learning algorithms with formal regret guarantees essentially rely on trying all possible behaviors, which is problematic when some errors cannot be recovered from. Instead, we allow the learning agent to ask for help from a mentor and to transfer knowledge between similar states. We show that this combination enables the agent to learn both safely and effectively. Under standard online learning assumptions, we provide an algorithm whose regret and number of mentor queries are both sublinear in the time horizon for any Markov Decision Process (MDP), including MDPs with irreversible dynamics. Our proof involves a sequence of three reductions which may be of independent interest. Conceptually, our result may be the first formal proof that it is possible for an agent to obtain high reward while becoming self-sufficient in an unknown, unbounded, and high-stakes environment without resets.
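To make the ask-for-help idea concrete, below is a minimal, hypothetical Python sketch; it is not the paper's algorithm. The agent queries a stand-in mentor whenever it lacks experience near the current state, reuses the mentor's advice for similar states, and shrinks its query threshold over time so mentor queries taper off (a crude proxy for the sublinear-query guarantee). All names (mentor_policy, similarity, run_agent) and the toy dynamics are assumptions made for illustration only.

import random

def similarity(s1, s2):
    # Hypothetical similarity measure between integer-valued states.
    return 1.0 / (1.0 + abs(s1 - s2))

def mentor_policy(state):
    # Stand-in mentor: always knows a safe action (here, action 0).
    return 0

def run_agent(horizon=1000, n_states=10, seed=0):
    rng = random.Random(seed)
    known = {}          # state -> action learned from the mentor
    queries = 0
    total_reward = 0.0
    state = 0

    for t in range(1, horizon + 1):
        # Confidence: similarity to the closest state with mentor advice.
        confidence = max((similarity(state, s) for s in known), default=0.0)

        # Query threshold shrinks over time so the number of queries
        # grows sublinearly in the horizon.
        threshold = 1.0 / (t ** 0.25)

        if confidence < threshold:
            action = mentor_policy(state)   # ask for help
            known[state] = action
            queries += 1
        else:
            # Transfer knowledge from the most similar known state.
            nearest = max(known, key=lambda s: similarity(state, s))
            action = known[nearest]

        # Toy dynamics/reward: the safe action keeps reward flowing; in the
        # real setting, unsafe actions could be irreversible.
        reward = 1.0 if action == 0 else 0.0
        total_reward += reward
        state = rng.randrange(n_states)

    return total_reward, queries

if __name__ == "__main__":
    reward, queries = run_agent()
    print(f"total reward: {reward:.0f}, mentor queries: {queries}")

Running the sketch shows the qualitative behavior the abstract describes: the agent queries the mentor frequently at first, then relies increasingly on knowledge transferred from similar states, so the query count grows much more slowly than the time horizon.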
