Neural Exploitation and Exploration of Contextual Bandits
- URL: http://arxiv.org/abs/2305.03784v1
- Date: Fri, 5 May 2023 18:34:49 GMT
- Title: Neural Exploitation and Exploration of Contextual Bandits
- Authors: Yikun Ban, Yuchen Yan, Arindam Banerjee, Jingrui He
- Abstract summary: We study utilizing neural networks for the exploitation and exploration of contextual multi-armed bandits.
EE-Net is a novel neural-based exploitation and exploration strategy.
We show that EE-Net outperforms related linear and neural contextual bandit baselines on real-world datasets.
- Score: 51.25537742455235
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we study utilizing neural networks for the exploitation and
exploration of contextual multi-armed bandits. Contextual multi-armed bandits
have been studied for decades with various applications. To solve the
exploitation-exploration trade-off in bandits, there are three main techniques:
epsilon-greedy, Thompson Sampling (TS), and Upper Confidence Bound (UCB). In
recent literature, a series of neural bandit algorithms have been proposed to
adapt to the non-linear reward function, combined with TS or UCB strategies for
exploration. In this paper, instead of calculating a large-deviation-based
statistical bound for exploration as previous methods do, we propose
"EE-Net," a novel neural-based exploitation and exploration strategy. In
addition to using a neural network (Exploitation network) to learn the reward
function, EE-Net uses another neural network (Exploration network) to
adaptively learn the potential gains compared to the currently estimated reward
for exploration. We provide an instance-based
$\widetilde{\mathcal{O}}(\sqrt{T})$ regret upper bound for EE-Net and show that
EE-Net outperforms related linear and neural contextual bandit baselines on
real-world datasets.
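Below is a minimal Python sketch of the two-network idea described in the abstract; all names (f1, f2, the context dimension, learning rates) are chosen for illustration. Note that in the paper the exploration network takes a transformation of the exploitation network's gradient as input; for brevity, this sketch feeds it the raw context instead:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small feed-forward network used for both f1 and f2."""
    def __init__(self, dim, hidden=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

d = 10                # assumed context dimension
f1 = MLP(d)           # exploitation network: estimates the reward
f2 = MLP(d)           # exploration network: estimates the potential gain over f1
opt1 = torch.optim.SGD(f1.parameters(), lr=1e-2)
opt2 = torch.optim.SGD(f2.parameters(), lr=1e-2)

def select_arm(contexts):
    """Score each arm by estimated reward plus estimated potential gain."""
    with torch.no_grad():
        scores = f1(contexts) + f2(contexts)  # shape (num_arms, 1)
    return int(scores.squeeze(-1).argmax())

def update(x, reward):
    """f1 fits the observed reward; f2 fits the residual (the potential gain)."""
    loss1 = ((f1(x) - reward) ** 2).mean()
    opt1.zero_grad(); loss1.backward(); opt1.step()
    residual = reward - f1(x).detach()
    loss2 = ((f2(x) - residual) ** 2).mean()
    opt2.zero_grad(); loss2.backward(); opt2.step()
```

The key design choice this sketch reflects: exploration is itself learned by a network rather than derived from a statistical confidence bound.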
Related papers
- Federated Neural Bandit [46.64825970508973]
In this paper, we introduce the federated neural-upper confidence bound (FN-UCB) algorithm for neural contextual bandits.
To better exploit the federated setting, we adopt a weighted combination of two UCBs: $\text{UCB}^{a}$ allows every agent to additionally use the observations from the other agents to accelerate exploration.
We prove sub-linear upper bounds on both the cumulative regret and the number of communication rounds of FN-UCB, and use empirical experiments to demonstrate its competitive performance.
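A schematic sketch of the weighted two-UCB combination mentioned above; `ucb_a` and `ucb_b` are hypothetical per-arm scores (in the paper, $\text{UCB}^{a}$ draws on all agents' observations), and the mixing weight is an assumed constant:

```python
import numpy as np

def fn_ucb_select(ucb_a, ucb_b, weight=0.5):
    """Pick the arm maximizing a weighted combination of two UCB scores.

    ucb_a, ucb_b: arrays of per-arm upper confidence bounds (assumed given).
    weight: mixing coefficient between the two bounds (hypothetical choice).
    """
    combined = weight * np.asarray(ucb_a) + (1.0 - weight) * np.asarray(ucb_b)
    return int(np.argmax(combined))

# Example: three arms scored by two hypothetical UCBs.
print(fn_ucb_select([0.4, 0.9, 0.6], [0.8, 0.5, 0.7]))
```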
arXiv Detail & Related papers (2022-05-28T02:58:37Z) - Learning Contextual Bandits Through Perturbed Rewards [107.6210145983805]
We show that a $\widetilde{O}(\tilde{d}\sqrt{T})$ regret upper bound is still achievable under standard regularity conditions.
We perturb the rewards when updating the neural network to eliminate the need of explicit exploration.
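A minimal sketch of the perturbed-reward idea: instead of adding an explicit exploration bonus, noise is injected into the observed rewards before each network update, and actions are then chosen greedily. The names, the Gaussian noise choice, and the noise scale are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
noise_scale = 0.1  # assumed perturbation level

def perturbed_targets(rewards):
    """Add i.i.d. Gaussian noise to observed rewards before fitting the network."""
    rewards = np.asarray(rewards, dtype=float)
    return rewards + noise_scale * rng.standard_normal(rewards.shape)

def greedy_arm(model, contexts):
    """No explicit bonus: act greedily on the model trained on perturbed rewards."""
    return int(np.argmax([model(x) for x in contexts]))
```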
arXiv Detail & Related papers (2022-01-24T19:10:22Z) - EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits [52.98326168071513]
"EE-Net" is a neural-based bandit approach with a novel exploration strategy.
We show that EE-Net achieves $\mathcal{O}(\sqrt{T\log T})$ regret, which is tighter than existing state-of-the-art neural bandit algorithms.
arXiv Detail & Related papers (2021-10-07T04:12:36Z) - Convolutional Neural Bandit: Provable Algorithm for Visual-aware
Advertising [41.30283330958433]
Contextual multi-armed bandits have shown success in advertising applications, solving the exploration-exploitation dilemma that exists in the recommendation procedure.
Inspired by visual-aware advertising, we propose a contextual bandit algorithm in this paper.
arXiv Detail & Related papers (2021-07-02T03:02:29Z) - Neural Contextual Bandits with Deep Representation and Shallow
Exploration [105.8099566651448]
We propose a novel learning algorithm that transforms the raw feature vector using the last hidden layer of a deep ReLU neural network.
Compared with existing neural contextual bandit algorithms, our approach is computationally much more efficient since it only needs to explore in the last layer of the deep neural network.
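A minimal sketch of the deep-representation / shallow-exploration idea: the last hidden layer of a deep ReLU network supplies the features, and a LinUCB-style rule explores only over those features. All names and the stand-in feature vector are illustrative:

```python
import numpy as np

def linucb_score(phi, A_inv, theta, alpha=1.0):
    """LinUCB on learned features: point estimate plus confidence width.

    phi:   feature vector from the network's last hidden layer
    A_inv: inverse of the regularized design matrix over past features
    theta: ridge-regression weights on the features
    alpha: exploration strength (assumed constant)
    """
    mean = phi @ theta
    width = alpha * np.sqrt(phi @ A_inv @ phi)
    return mean + width

# phi would be computed by a deep ReLU network; here a stand-in vector:
d = 8
A_inv = np.eye(d)
theta = np.zeros(d)
phi = np.ones(d) / np.sqrt(d)
print(linucb_score(phi, A_inv, theta))
```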
arXiv Detail & Related papers (2020-12-03T09:17:55Z) - Neural Thompson Sampling [94.82847209157494]
We propose a new algorithm, called Neural Thompson Sampling, which adapts deep neural networks for both exploration and exploitation.
At the core of our algorithm is a novel posterior distribution of the reward, where its mean is the neural network approximator, and its variance is built upon the neural tangent features of the corresponding neural network.
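A minimal sketch of that sampling rule: the posterior mean is the network's prediction, and the variance is built from the network's gradient (neural tangent) features. Here `f` is assumed to be a PyTorch reward network, `U_inv` a hypothetical inverse design matrix of past gradients, and `nu` an assumed scaling constant:

```python
import torch

def neural_ts_score(f, x, U_inv, nu=1.0):
    """Sample a reward from N(f(x), nu^2 * sigma^2), where sigma^2 is
    computed from the network's neural tangent features."""
    f.zero_grad()
    mean = f(x).squeeze()   # posterior mean: the network's prediction
    mean.backward()         # populate parameter gradients at x
    g = torch.cat([p.grad.flatten() for p in f.parameters()])  # tangent features
    sigma2 = g @ U_inv @ g                                      # NTK-style variance
    return torch.normal(mean.detach(), nu * sigma2.sqrt()).item()
```

Each arm is scored by one such sample, and the arm with the largest sampled reward is played.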
arXiv Detail & Related papers (2020-10-02T07:44:09Z) - Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed with substantially less data.
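A minimal sketch of Monte-Carlo Value-at-Risk threshold estimation as described above: simulate asset returns and take the empirical lower quantile of the simulated distribution. The Gaussian return model and its parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo_var(mu, sigma, level=0.99, n_sims=100_000):
    """Estimate the VaR threshold as the (1 - level) quantile of simulated
    returns, reported as a positive loss figure."""
    returns = rng.normal(mu, sigma, size=n_sims)  # assumed Gaussian returns
    return -np.quantile(returns, 1.0 - level)

# Example: daily returns with 0.05% mean and 1% volatility.
print(f"99% VaR: {monte_carlo_var(0.0005, 0.01):.4f}")
```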
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.