Abstract: In this paper we consider Thompson Sampling for combinatorial semi-bandits.
We demonstrate that, perhaps surprisingly, Thompson Sampling is sub-optimal for
this problem in the sense that its regret scales exponentially in the ambient
dimension, and its minimax regret scales almost linearly in the time horizon. This phenomenon
occurs under a wide variety of assumptions including both non-linear and linear
reward functions. We also show that adding a fixed amount of forced
exploration to Thompson Sampling does not alleviate the problem. We complement
our theoretical results with numerical experiments, showing that in practice
Thompson Sampling can indeed perform very poorly in high dimensions.
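To make the setting concrete, the following is a minimal sketch (not the paper's experimental code) of Thompson Sampling for a combinatorial semi-bandit with a linear reward: at each round the learner selects m of d items, observes a Bernoulli reward for each chosen item, and earns their sum. The Bernoulli rewards, independent Beta priors, the means `mu`, the subset size `m`, and the function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def thompson_sampling_semibandit(mu, m, T, rng=None):
    """Thompson Sampling with independent Beta(1, 1) priors on each item's mean.

    Returns the cumulative expected (pseudo-)regret over T rounds.
    This is an illustrative sketch, not the algorithm variant analyzed in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = np.asarray(mu, dtype=float)
    d = len(mu)
    alpha = np.ones(d)  # posterior successes + 1
    beta = np.ones(d)   # posterior failures + 1
    best_value = np.sort(mu)[-m:].sum()  # expected reward of the best m items
    regret = np.zeros(T)
    for t in range(T):
        theta = rng.beta(alpha, beta)              # sample a mean for each item
        action = np.argsort(theta)[-m:]            # play the m items with largest samples
        rewards = rng.binomial(1, mu[action])      # semi-bandit feedback: one reward per chosen item
        alpha[action] += rewards
        beta[action] += 1 - rewards
        regret[t] = best_value - mu[action].sum()  # expected regret incurred this round
    return np.cumsum(regret)

# Example: d = 20 items, choose m = 5, with the first 5 items forming the best subset.
mu = np.full(20, 0.4)
mu[:5] = 0.6
print(thompson_sampling_semibandit(mu, m=5, T=2000)[-1])
```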