UnifiedRL: A Reinforcement Learning Algorithm Tailored for Multi-Task Fusion in Large-Scale Recommender Systems

Peng Liu, Cong Xu, Ming Zhao, Jiawei Zhu, Bin Wang, Yi Ren

Published: 2024/4/19

Abstract

As the last pivotal stage of a Recommender System (RS), Multi-Task Fusion (MTF) is responsible for combining the multiple scores output by a Multi-Task Learning (MTL) model into a final score that maximizes user satisfaction. Recently, Reinforcement Learning (RL) has been used for MTF in RSs to optimize long-term user satisfaction. However, the existing offline RL algorithms used for MTF suffer from severe problems: a) to avoid Out-of-Distribution (OOD) actions, they impose overly strict constraints that seriously damage performance; b) they are unaware of the exploration policy used to collect the training data, so only a suboptimal policy can be learned; c) their exploration policies are inefficient and hurt user experience. To solve these problems, we propose UnifiedRL, an innovative method tailored for MTF in large-scale RSs. Unlike existing RL-MTF methods, UnifiedRL seamlessly integrates the offline RL model with its custom exploration policy to relax the overly strict constraints, which significantly improves performance. In addition, UnifiedRL's custom exploration policy is far more efficient than existing exploration policies, enabling frequent iterations of online exploration and offline training, which further improves performance. Extensive offline and online experiments conducted in a large-scale RS demonstrate that UnifiedRL remarkably outperforms existing MTF methods, achieving a +4.64% increase in user valid consumption and a +1.74% increase in user duration time. To the best of our knowledge, UnifiedRL is the first RL algorithm tailored for MTF in RSs; it has been successfully deployed in multiple large-scale RSs since June 2023, yielding significant benefits.
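The abstract's central idea is that knowing the exploration policy that collected the data lets the offline learner replace the usual strict behavior constraint with a looser one that only forbids truly unexplored actions. The sketch below is a rough illustration of that idea under our own assumptions, not the authors' implementation: all names (Actor, Critic, sigma_explore, the band width, and the penalty weight) are hypothetical, and the exploration policy is assumed to be Gaussian noise of known scale around the logged fusion weights.

```python
# Conceptual sketch only: relaxing an offline-RL behavior constraint
# when the exploration policy (Gaussian, known scale) is known.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps user/context state to fusion weights in (0, 1)."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Sigmoid(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class Critic(nn.Module):
    """Estimates long-term user satisfaction Q(s, a)."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def actor_loss(actor: Actor, critic: Critic,
               state: torch.Tensor, behavior_action: torch.Tensor,
               sigma_explore: float = 0.1, penalty_weight: float = 10.0):
    """Maximize Q, but penalize an action only when it leaves the band the
    known exploration policy actually covered (|a - a_b| > 2 * sigma),
    rather than pulling the policy all the way back to the logged action."""
    action = actor(state)
    q_term = -critic(state, action).mean()
    # Hinge penalty: zero inside the explored band, linear outside it.
    overflow = (action - behavior_action).abs() - 2.0 * sigma_explore
    ood_penalty = torch.clamp(overflow, min=0.0).mean()
    return q_term + penalty_weight * ood_penalty
```

Because the exploration noise scale is known rather than estimated from data, the penalty-free band is exact: the learned policy may move anywhere the logged data genuinely supports instead of being collapsed onto the logged actions, which is one plausible reading of how relaxing the constraint improves performance.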