Understanding the stochastic dynamics of sequential decision-making
processes: A path-integral analysis of multi-armed bandits
- URL: http://arxiv.org/abs/2208.06245v2
- Date: Sat, 10 Jun 2023 13:06:06 GMT
- Title: Understanding the stochastic dynamics of sequential decision-making
processes: A path-integral analysis of multi-armed bandits
- Authors: Bo Li and Chi Ho Yeung
- Abstract summary: The multi-armed bandit (MAB) model is one of the most popular models to study decision-making in an uncertain environment.
In this paper, we employ techniques in statistical physics to analyze the MAB model.
- Score: 7.05949591248206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The multi-armed bandit (MAB) model is one of the most classical models to
study decision-making in an uncertain environment. In this model, a player
chooses one of $K$ possible arms of a bandit machine to play at each time step,
where the corresponding arm returns a random reward to the player, potentially
from a specific unknown distribution. The target of the player is to collect as
many rewards as possible during the process. Despite its simplicity, the MAB
model offers an excellent playground for studying the trade-off between
exploration versus exploitation and designing effective algorithms for
sequential decision-making under uncertainty. Although many asymptotically
optimal algorithms have been established, the finite-time behaviors of the
stochastic dynamics of the MAB model appear much more challenging to analyze,
due to the intertwining between the decision-making and the rewards being
collected. In this paper, we employ techniques in statistical physics to
analyze the MAB model, which facilitates the characterization of the
distribution of cumulative regret at finite, short times, the central quantity
of interest in an MAB algorithm, as well as the intricate dynamical behaviors
of the model. Our analytical results, in good agreement with simulations, point
to the emergence of an interesting multimodal regret distribution, with large
regrets resulting from excessive exploitation of sub-optimal arms after an
initial unlucky draw from the optimal one.
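To make the central quantity concrete, the sketch below simulates a two-armed Bernoulli bandit under a simple epsilon-greedy policy and records the cumulative pseudo-regret over many independent runs. This is only an illustrative simulation under assumed arm means, horizon, and policy, not the paper's path-integral analysis; it does, however, exhibit the mechanism the abstract describes, where an unlucky initial draw from the optimal arm can lock a greedy-leaning player onto a sub-optimal arm and inflate the regret.

```python
import numpy as np

def run_bandit(means, horizon, eps=0.05, rng=None):
    """One epsilon-greedy run on a Bernoulli bandit.

    Returns the cumulative pseudo-regret sum_t (mu* - mu_{a_t}).
    """
    rng = rng or np.random.default_rng()
    K = len(means)
    counts = np.zeros(K)        # pulls per arm
    estimates = np.zeros(K)     # empirical mean reward per arm
    best_mean = max(means)
    regret = 0.0
    for t in range(horizon):
        if t < K:                            # pull each arm once to start
            arm = t
        elif rng.random() < eps:             # explore uniformly at random
            arm = int(rng.integers(K))
        else:                                # exploit the current best estimate
            arm = int(np.argmax(estimates))
        reward = float(rng.random() < means[arm])   # Bernoulli reward
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        regret += best_mean - means[arm]
    return regret

# Distribution of finite-time regret over many runs; an unlucky first draw
# from the optimal arm can create a second mode at large regret.
means = [0.7, 0.5]              # assumed instance, not taken from the paper
regrets = [run_bandit(means, horizon=200) for _ in range(5000)]
print(f"mean regret = {np.mean(regrets):.1f}, "
      f"95th percentile = {np.percentile(regrets, 95):.1f}")
```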
Related papers
- A Bandit Approach with Evolutionary Operators for Model Selection [0.4604003661048266]
This work formulates model selection as an infinite-armed bandit problem, namely, a problem in which a decision maker iteratively selects one of an infinite number of fixed choices (i.e., arms).
The arms are machine learning models to train, and selecting an arm corresponds to a partial training of the model (resource allocation).
We propose the algorithm Mutant-UCB that incorporates operators from evolutionary algorithms into the UCB-E bandit algorithm introduced by Audibert et al. (a plain UCB1 sketch, for reference, follows this list)
Tests carried out on three open source image classification data sets attest to the relevance of this novel combined approach, which outperforms the state of the art.
arXiv Detail & Related papers (2024-02-07T08:01:45Z) - On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z) - ShuttleSHAP: A Turn-Based Feature Attribution Approach for Analyzing
Forecasting Models in Badminton [52.21869064818728]
Deep learning approaches for player tactic forecasting in badminton show promising performance partially attributed to effective reasoning about rally-player interactions.
We propose a turn-based feature attribution approach, ShuttleSHAP, for analyzing forecasting models in badminton based on variants of Shapley values.
arXiv Detail & Related papers (2023-12-18T05:37:51Z) - Let's reward step by step: Step-Level reward model as the Navigators for
Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similarly improved performance on code generation tasks.
arXiv Detail & Related papers (2023-10-16T05:21:50Z) - Complexity Analysis of a Countable-armed Bandit Problem [9.163501953373068]
We study the classical problem of minimizing the expected cumulative regret over a horizon of play $n$.
We propose algorithms that achieve a rate-optimal finite-time instance-dependent regret of $\mathcal{O}\left(\log n\right)$ when $K=2$.
While the order of regret and complexity of the problem suggests a great degree of similarity to the classical MAB problem, properties of the performance bounds and salient aspects of algorithm design are quite distinct from the latter.
arXiv Detail & Related papers (2023-01-18T00:53:46Z) - Exploration in Model-based Reinforcement Learning with Randomized Reward [40.87376174638752]
We show that under the kernelized linear regulator (KNR) model, reward randomization guarantees a partial optimism.
We further extend our theory to generalized function approximation and identify conditions for reward randomization to attain provably efficient exploration.
arXiv Detail & Related papers (2023-01-09T01:50:55Z) - Sample Efficient Reinforcement Learning via Model-Ensemble Exploration
and Exploitation [3.728946517493471]
MEEE is a model-ensemble method that consists of optimistic exploration and weighted exploitation.
Our approach outperforms other model-free and model-based state-of-the-art methods, especially in sample complexity.
arXiv Detail & Related papers (2021-07-05T07:18:20Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, non-convex optimal control problems.
arXiv Detail & Related papers (2020-11-15T22:36:59Z) - Thompson Sampling Algorithms for Mean-Variance Bandits [97.43678751629189]
We develop Thompson Sampling-style algorithms for mean-variance MAB (a classical Bernoulli Thompson sampling sketch, for reference, follows this list).
We also provide comprehensive regret analyses for Gaussian and Bernoulli bandits.
Our algorithms significantly outperform existing LCB-based algorithms for all risk tolerances.
arXiv Detail & Related papers (2020-02-01T15:33:50Z)
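For reference alongside the Mutant-UCB entry above, here is a minimal sketch of the classical UCB1 index rule that the UCB-E family builds on: play the arm maximizing the empirical mean plus an exploration bonus. This is plain UCB1, not UCB-E (which, in Audibert et al.'s formulation, uses a tunable exploration parameter in place of the logarithmic term) and not Mutant-UCB; the reward model and constant below are assumptions.

```python
import math
import random

def ucb1(pull, K, horizon, c=2.0):
    """Classical UCB1: play arg max_i  mean_i + sqrt(c * ln(t) / n_i).

    `pull(i)` is a caller-supplied function returning a stochastic reward
    in [0, 1]; `c` is the exploration constant.
    """
    counts = [0] * K
    sums = [0.0] * K
    for i in range(K):                       # initialize: pull each arm once
        sums[i] += pull(i)
        counts[i] = 1
    for t in range(K, horizon):
        index = [sums[i] / counts[i]
                 + math.sqrt(c * math.log(t + 1) / counts[i])
                 for i in range(K)]
        arm = index.index(max(index))        # arm with the largest upper bound
        sums[arm] += pull(arm)
        counts[arm] += 1
    return counts

# Example: two Bernoulli arms; pull counts concentrate on the better arm.
means = [0.7, 0.5]                           # assumed instance
print(ucb1(lambda i: float(random.random() < means[i]), K=2, horizon=1000))
```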
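Likewise, for the Thompson Sampling entry above: a sketch of classical Bernoulli Thompson sampling with Beta posteriors, i.e., the plain expected-reward objective. The mean-variance algorithms in that paper change how posterior samples are mapped to an arm choice; this is only the standard baseline, on an assumed instance.

```python
import random

def thompson_bernoulli(pull, K, horizon):
    """Classical Thompson sampling for Bernoulli arms with Beta(1, 1) priors.

    Each step: sample a mean from every arm's Beta posterior and play the
    arm with the largest sample; update that arm's posterior with the reward.
    """
    alpha = [1] * K   # 1 + observed successes
    beta = [1] * K    # 1 + observed failures
    for _ in range(horizon):
        samples = [random.betavariate(alpha[i], beta[i]) for i in range(K)]
        arm = samples.index(max(samples))
        reward = pull(arm)                   # 0/1 reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return alpha, beta

# Example: the posterior mean of the better arm approaches its true mean.
means = [0.7, 0.5]                           # assumed instance
a, b = thompson_bernoulli(lambda i: int(random.random() < means[i]), K=2,
                          horizon=1000)
print([round(ai / (ai + bi), 3) for ai, bi in zip(a, b)])
```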
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.