A Short Note on Soft-max and Policy Gradients in Bandits Problems
- URL: http://arxiv.org/abs/2007.10297v1
- Date: Mon, 20 Jul 2020 17:30:27 GMT
- Title: A Short Note on Soft-max and Policy Gradients in Bandits Problems
- Authors: Neil Walton
- Abstract summary: We give a short argument that gives a regret bound for the soft-max ordinary differential equation for bandit problems.
We derive a similar result for a different policy gradient algorithm, again for bandit problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This is a short communication on a Lyapunov function argument for softmax in
bandit problems. There are a number of excellent papers coming out using
differential equations for policy gradient algorithms in reinforcement learning
\cite{agarwal2019optimality,bhandari2019global,mei2020global}. We give a short
argument that gives a regret bound for the soft-max ordinary differential
equation for bandit problems. We derive a similar result for a different policy
gradient algorithm, again for bandit problems. For this second algorithm, it is
possible to prove regret bounds in the stochastic case \cite{DW20}. At the end,
we summarize some ideas and issues on deriving stochastic regret bounds for
policy gradients.
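As a rough illustration of the object analyzed in the note (not its Lyapunov argument), the soft-max ordinary differential equation for a K-armed bandit is the gradient flow of the expected reward under the soft-max parameterization. A minimal sketch, assuming known mean rewards, an Euler discretization, and made-up parameter names, might look as follows:

    # Minimal sketch (illustrative assumptions only): Euler discretization of the
    # soft-max policy-gradient ODE for a K-armed bandit with known mean rewards mu,
    # integrating the instantaneous regret of the flow.
    import numpy as np

    def softmax(theta):
        z = np.exp(theta - theta.max())
        return z / z.sum()

    def softmax_pg_flow(mu, horizon=500.0, dt=0.01):
        mu = np.asarray(mu, dtype=float)
        theta = np.zeros_like(mu)          # uniform initial policy
        regret = 0.0
        for _ in range(int(horizon / dt)):
            pi = softmax(theta)
            J = pi @ mu                    # expected reward of the current policy
            theta += dt * pi * (mu - J)    # d(theta_a)/dt = pi_a * (mu_a - J)
            regret += dt * (mu.max() - J)  # integrate the instantaneous regret
        return softmax(theta), regret

    if __name__ == "__main__":
        pi_final, R = softmax_pg_flow([0.9, 0.5, 0.2])
        print("final policy:", np.round(pi_final, 3), "integrated regret:", round(R, 2))

The integrated quantity plays the role of cumulative regret for the deterministic flow; as the abstract notes, the stochastic case requires a separate argument.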
Related papers
- Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression [65.8785736964253]
We consider contextual bandits with linear constraints (CBwLC), a variant of contextual bandits in which the algorithm consumes multiple resources subject to linear constraints on total consumption.
This problem generalizes contextual bandits with knapsacks (CBwK), allowing for packing and covering constraints, as well as positive and negative resource consumption.
We provide the first algorithm for CBwLC (or CBwK) that is based on regression oracles. The algorithm is simple, computationally efficient, and statistically optimal under mild assumptions.
arXiv Detail & Related papers (2022-11-14T16:08:44Z) - Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms [39.70492757288025]
We address the contextual linear bandit problem, where a decision maker is provided a context.
We show that the contextual problem can be solved as a linear bandit problem.
Our results imply an $O(d\sqrt{T\log T})$ high-probability regret bound for contextual linear bandits.
arXiv Detail & Related papers (2022-11-08T22:18:53Z) - Complete Policy Regret Bounds for Tallying Bandits [51.039677652803675]
Policy regret is a well established notion of measuring the performance of an online learning algorithm against an adaptive adversary.
We study restrictions on the adversary that enable efficient minimization of the complete policy regret.
We provide an algorithm that w.h.p. achieves a complete policy regret guarantee of $\tilde{\mathcal{O}}(mK\sqrt{T})$, where the $\tilde{\mathcal{O}}$ notation hides only logarithmic factors.
arXiv Detail & Related papers (2022-04-24T03:10:27Z) - Enhancing Classifier Conservativeness and Robustness by Polynomiality [23.099278014212146]
We show how polynomiality can remedy the situation.
A directly related, simple, yet important technical novelty we subsequently present is softRmax.
We show that two aspects of softRmax, conservativeness and inherent robustness, lead to adversarial regularization.
arXiv Detail & Related papers (2022-03-23T19:36:19Z) - Instance-Dependent Regret Analysis of Kernelized Bandits [19.252319300590653]
We study the kernelized bandit problem, which involves designing an adaptive strategy for querying a noisy zeroth-order oracle.
We derive instance-dependent regret lower bounds for algorithms with uniformly (over the function class) vanishing normalized cumulative regret.
arXiv Detail & Related papers (2022-03-12T00:53:59Z) - Risk and optimal policies in bandit experiments [0.0]
This paper provides a decision theoretic analysis of bandit experiments.
The bandit setting corresponds to a dynamic programming problem, but solving this directly is typically infeasible.
For normally distributed rewards, the minimal Bayes risk can be characterized as the solution to a nonlinear second-order partial differential equation.
arXiv Detail & Related papers (2021-12-13T00:41:19Z) - Efficient and Optimal Algorithms for Contextual Dueling Bandits under Realizability [59.81339109121384]
We study the $K$-armed contextual dueling bandit problem, a sequential decision making setting in which the learner uses contextual information to make two decisions, but only observes preference-based feedback suggesting that one decision was better than the other.
We provide a new algorithm that achieves the optimal regret rate for a new notion of best response regret, which is a strictly stronger performance measure than those considered in prior works.
arXiv Detail & Related papers (2021-11-24T07:14:57Z) - Optimal Gradient-based Algorithms for Non-concave Bandit Optimization [76.57464214864756]
This work considers a large family of bandit problems where the unknown underlying reward function is non-concave.
Our algorithms are based on a unified zeroth-order optimization paradigm that applies in great generality.
We show that the standard optimistic algorithms are sub-optimal by dimension factors.
arXiv Detail & Related papers (2021-07-09T16:04:24Z) - Upper Confidence Bounds for Combining Stochastic Bandits [52.10197476419621]
We provide a simple method to combine bandit algorithms.
Our approach is based on a "meta-UCB" procedure that treats each of $N$ individual bandit algorithms as arms in a higher-level $N$-armed bandit problem (see the sketch after this list).
arXiv Detail & Related papers (2020-12-24T05:36:29Z) - Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z)
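The "meta-UCB" procedure mentioned above (Upper Confidence Bounds for Combining Stochastic Bandits) treats N base bandit algorithms as arms of a higher-level bandit. The sketch below is a generic illustration under assumed interfaces, using a plain UCB1 rule and an invented select/update API for the base algorithms; it is not the cited paper's exact procedure.

    # Hypothetical meta-UCB sketch: run UCB1 over N base bandit algorithms,
    # treating each base algorithm as a meta-arm. Interfaces are assumptions.
    import math
    import random

    class EpsGreedy:
        """Toy base algorithm over k real arms (illustrative only)."""
        def __init__(self, k, eps):
            self.k, self.eps = k, eps
            self.counts = [0] * k
            self.sums = [0.0] * k

        def select(self):
            if random.random() < self.eps:
                return random.randrange(self.k)
            means = [self.sums[a] / self.counts[a] if self.counts[a] else float("inf")
                     for a in range(self.k)]
            return max(range(self.k), key=means.__getitem__)

        def update(self, arm, reward):
            self.counts[arm] += 1
            self.sums[arm] += reward

    def meta_ucb(base_algs, pull, horizon):
        """UCB1 over base algorithms; pull(arm) returns a reward in [0, 1]."""
        n = len(base_algs)
        counts, sums = [0] * n, [0.0] * n
        for t in range(1, horizon + 1):
            if t <= n:
                i = t - 1                    # play each base algorithm once
            else:
                i = max(range(n), key=lambda j: sums[j] / counts[j]
                        + math.sqrt(2.0 * math.log(t) / counts[j]))
            arm = base_algs[i].select()      # chosen base algorithm picks the real arm
            reward = pull(arm)               # environment feedback
            base_algs[i].update(arm, reward) # base algorithm learns
            counts[i] += 1                   # meta level tracks which algorithm pays off
            sums[i] += reward
        return counts

    if __name__ == "__main__":
        mu = [0.2, 0.5, 0.8]
        pull = lambda a: 1.0 if random.random() < mu[a] else 0.0
        algs = [EpsGreedy(len(mu), 0.01), EpsGreedy(len(mu), 0.2)]
        print("meta-level pull counts:", meta_ucb(algs, pull, 5000))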