Extended UCB Policies for Multi-armed Bandit Problems

Keqin Liu, Tianshuo Zheng, Zhi-Hua Zhou

Published: 2011/12/8

Abstract

The multi-armed bandit (MAB) problem is widely studied in the fields of operations research, stochastic optimization, and reinforcement learning. In this paper, we consider the classical MAB model with heavy-tailed reward distributions and introduce the extended robust UCB policy, which extends the results of Bubeck et al. [5] and Lattimore [22], themselves built on the pioneering idea of UCB policies [e.g., Auer et al. 3]. Previous UCB policies require strict conditions on the reward distributions, which can be difficult to guarantee in practical scenarios. Our extended robust UCB generalizes Lattimore's seminal work (for moments of orders $p=4$ and $q=2$) to arbitrarily chosen $p>q>1$, as long as the two moments satisfy a known controlled relationship, while still achieving the optimal regret growth order $O(\log T)$, thus broadening the applicability of UCB policies to heavy-tailed reward distributions. Furthermore, we achieve a near-optimal regret order without any knowledge of the reward distributions, as long as their $p$-th moments exist for some $p>1$. Finally, we briefly present our earlier work on light-tailed reward distributions for a complete illustration of the amazing simplicity and power of UCB policies.
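For readers unfamiliar with the robust UCB idea the paper builds on, below is a minimal Python sketch of the truncated-empirical-mean variant of robust UCB in the spirit of Bubeck et al. [5], not the extended policy proposed in this paper. The function names (`robust_ucb`, `truncated_mean`) and the assumption of a known bound `u` on the $(1+\epsilon)$-th raw moment are illustrative choices, not taken from the paper.

```python
import math

def truncated_mean(rewards, u, epsilon, delta):
    """Truncated empirical mean: the reward from pull i is kept only if its
    magnitude is below a threshold that grows with i, assuming a known bound
    u on the (1+epsilon)-th raw moment of the reward distribution."""
    total = 0.0
    for i, s in enumerate(rewards, start=1):
        threshold = (u * i / math.log(1.0 / delta)) ** (1.0 / (1.0 + epsilon))
        if abs(s) <= threshold:
            total += s
    return total / len(rewards)

def robust_ucb(arms, T, u, epsilon):
    """Pull each arm once, then always pull the arm with the largest
    truncated-mean UCB index.  `arms` is a list of no-argument sampling
    functions; rewards may be heavy-tailed but have (1+epsilon)-th
    moment bounded by u."""
    K = len(arms)
    history = [[arm()] for arm in arms]          # one initial pull per arm
    for t in range(K + 1, T + 1):
        delta = t ** (-2)                        # confidence level ~ 1/t^2
        indices = []
        for rewards in history:
            n = len(rewards)
            bonus = 4.0 * u ** (1.0 / (1.0 + epsilon)) * (
                math.log(1.0 / delta) / n) ** (epsilon / (1.0 + epsilon))
            indices.append(truncated_mean(rewards, u, epsilon, delta) + bonus)
        k = max(range(K), key=lambda j: indices[j])
        history[k].append(arms[k]())
    return history
```

As a usage example, `robust_ucb([lambda: random.paretovariate(2.0), lambda: random.paretovariate(3.0)], T=10000, u=10.0, epsilon=0.5)` would run the sketch on two heavy-tailed Pareto arms; the extended policy of this paper replaces the fixed moment-order assumption with general $p>q>1$ under a known relationship between the two moments.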
