Second Order Regret Bounds Against Generalized Expert Sequences under
Partial Bandit Feedback
- URL: http://arxiv.org/abs/2204.06660v1
- Date: Wed, 13 Apr 2022 22:48:12 GMT
- Authors: Kaan Gokcesu, Hakan Gokcesu
- Abstract summary: We study the problem of expert advice under the partial bandit feedback setting and create a sequential minimax optimal algorithm.
Our algorithm works in a more general partial monitoring setting where, in contrast to classical bandit feedback, the losses can be revealed in an adversarial manner.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of expert advice under the partial bandit feedback
setting and create a sequential minimax optimal algorithm. Our algorithm works in a
more general partial monitoring setting where, in contrast to classical bandit
feedback, the losses can be revealed in an adversarial manner. Our algorithm adopts a
universal prediction perspective, and its performance is analyzed via the regret
against a general expert selection sequence. The regret we study is against a general
competition class that covers many settings (such as the switching or contextual
experts settings); the expert selection sequences in the competition class are
determined by the application at hand. Our regret bounds are second-order bounds in
terms of the sum of squared losses, and the normalized regret of our algorithm is
invariant under arbitrary affine transforms of the loss sequence. Our algorithm is
truly online and does not use any preliminary information about the loss sequences.
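To make the shape of such a guarantee concrete, a second-order bound of this flavor typically reads as follows. This is an illustrative form only, not the paper's exact statement; the constants and the complexity measure C are assumptions here:

  R_T \;=\; \sum_{t=1}^{T} \ell_{t,i_t} \;-\; \sum_{t=1}^{T} \ell_{t,i_t^{*}} \;\le\; O\!\left(\sqrt{\Big(\sum_{t=1}^{T} \ell_{t,i_t^{*}}^{2}\Big)\, C}\right),

where i_t is the algorithm's selection at round t, \{i_t^{*}\} is a competing expert selection sequence from the competition class, and C is a complexity term of that class (e.g., growing with the number of switches in the switching-experts setting).

For intuition about the feedback model only, the sketch below is the classical Exp3-style template that combines exponential weights with importance-weighted loss estimates under bandit feedback. It is not the paper's algorithm: the fixed learning rate eta and all names are illustrative assumptions, and this baseline has neither the adaptivity nor the affine invariance the paper establishes.

```python
import numpy as np

# Exp3-style sketch: expert advice with bandit feedback.
# Illustrative only; eta, K, T are assumed values, not from the paper.
rng = np.random.default_rng(0)
K, T, eta = 5, 1000, 0.05      # number of experts, horizon, learning rate
log_w = np.zeros(K)            # log-weights, kept in log space for stability

for t in range(T):
    p = np.exp(log_w - log_w.max())
    p /= p.sum()               # exponential-weights distribution over experts
    i = rng.choice(K, p=p)     # follow one expert; only its loss is observed
    loss = rng.uniform(0, 1)   # stand-in for the adversary's revealed loss
    est = np.zeros(K)
    est[i] = loss / p[i]       # importance-weighted unbiased loss estimate
    log_w -= eta * est         # multiplicative update on estimated losses
```

A fixed eta already yields sublinear expected regret against the single best expert; the paper's contribution is to replace such static tuning with a sequential scheme whose bounds adapt to the squared losses and hold against general expert selection sequences.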
Related papers
- Non-stochastic Bandits With Evolving Observations [47.61533665679308]
We introduce a novel online learning framework that unifies and generalizes pre-established models.
We propose regret minimization algorithms for both the full-information and bandit settings.
Our algorithms match the known regret bounds across many special cases, while also introducing previously unknown bounds.
arXiv Detail & Related papers (2024-05-27T05:32:46Z)
- Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback [58.66941279460248]
Learning from human feedback plays an important role in aligning generative models, such as large language models (LLMs).
We study contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary.
We propose a robust contextual dueling bandit algorithm based on uncertainty-weighted maximum likelihood estimation.
arXiv Detail & Related papers (2024-04-16T17:59:55Z)
- Data Dependent Regret Guarantees Against General Comparators for Full or Bandit Feedback [0.0]
We study the adversarial online learning problem and create a completely online algorithmic framework that has data dependent regret guarantees.
Our algorithm works from a universal prediction perspective and the performance measure used is the expected regret against arbitrary comparator sequences.
arXiv Detail & Related papers (2023-03-12T00:18:46Z)
- Optimal Tracking in Prediction with Expert Advice [0.0]
We study the prediction with expert advice setting, where the aim is to produce a decision by combining the decisions generated by a set of experts.
We achieve the min-max optimal dynamic regret under the prediction with expert advice setting.
Our algorithms are the first to produce such universally optimal, adaptive and truly online guarantees with no prior knowledge.
arXiv Detail & Related papers (2022-08-07T12:29:54Z)
- Generalized Translation and Scale Invariant Online Algorithm for Adversarial Multi-Armed Bandits [0.0]
We study the adversarial multi-armed bandit problem and create a completely online algorithmic framework that is invariant under arbitrary translations and scales of the arm losses.
Our algorithm works from a universal prediction perspective and the performance measure used is the expected regret against arbitrary arm selection sequences.
arXiv Detail & Related papers (2021-09-19T20:13:59Z)
- Bayesian decision-making under misspecified priors with applications to meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
arXiv Detail & Related papers (2021-07-03T23:17:26Z)
- A Generalized Online Algorithm for Translation and Scale Invariant Prediction with Expert Advice [0.0]
We study the expected regret of our algorithm against a generic competition class in the sequential prediction by expert advice problem.
Our regret bounds are stable under arbitrary scalings and translations of the losses.
arXiv Detail & Related papers (2020-09-09T15:45:28Z)
- Comparator-adaptive Convex Bandits [77.43350984086119]
We develop convex bandit algorithms with regret bounds that are small whenever the norm of the comparator is small.
We extend the ideas to convex bandits with Lipschitz or smooth loss functions.
arXiv Detail & Related papers (2020-07-16T16:33:35Z)
- Adaptive Discretization for Adversarial Lipschitz Bandits [85.39106976861702]
Lipschitz bandits is a prominent version of multi-armed bandits that studies large, structured action spaces.
A central theme here is the adaptive discretization of the action space, which gradually "zooms in" on the more promising regions.
We provide the first algorithm for adaptive discretization in the adversarial version, and derive instance-dependent regret bounds.
arXiv Detail & Related papers (2020-06-22T16:06:25Z)
- Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles [112.89548995091182]
We provide the first universal and optimal reduction from contextual bandits to online regression.
Our algorithm requires no distributional assumptions beyond realizability, and works even when contexts are chosen adversarially.
arXiv Detail & Related papers (2020-02-12T11:33:46Z)