Is RL fine-tuning harder than regression? A PDE learning approach for diffusion models

Wenlong Mou

Published: 2025/9/2

Abstract

We study the problem of learning the optimal control policy for fine-tuning a given diffusion process, using general value function approximation. We develop a new class of algorithms that solve a variational inequality problem derived from the Hamilton-Jacobi-Bellman (HJB) equation. We prove sharp statistical rates for the learned value function and control policy, depending on the complexity of, and the approximation error incurred by, the chosen function class. In contrast to generic reinforcement learning problems, our analysis shows that fine-tuning can be achieved via supervised regression, with faster statistical rate guarantees.
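For orientation, here is a minimal sketch of the kind of control problem the abstract alludes to; the specific formulation below (pretrained drift b, noise level \sigma, control u, terminal reward r, KL-weighted penalty \lambda) is a standard setup for diffusion fine-tuning and is our assumption, not taken from the paper. Fine-tuning the pretrained diffusion dX_t = b(X_t, t)\,dt + \sigma\,dW_t by adding a control drift u_t, with terminal reward r and a KL-type running cost, yields the HJB equation

\[
\partial_t V + b \cdot \nabla V + \tfrac{\sigma^2}{2}\,\Delta V
  + \max_{u}\Big\{ u \cdot \nabla V - \tfrac{\lambda}{2\sigma^2}\,\|u\|^2 \Big\} = 0,
\qquad V(x, T) = r(x),
\]

with optimal control u^*(x, t) = (\sigma^2/\lambda)\,\nabla V(x, t). Under the Hopf--Cole transform h = e^{V/\lambda}, this nonlinear PDE becomes the linear Feynman--Kac equation

\[
\partial_t h + b \cdot \nabla h + \tfrac{\sigma^2}{2}\,\Delta h = 0,
\qquad h(x, T) = e^{r(x)/\lambda},
\]

whose solution h(x, t) = \mathbb{E}\big[e^{r(X_T)/\lambda} \mid X_t = x\big] is a conditional expectation under the pretrained dynamics, i.e., the minimizer of a squared-loss regression. This classical reduction is one way to read the abstract's claim that fine-tuning can be no harder than supervised regression.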
