Low-Rank Generalized Linear Bandit Problems
- URL: http://arxiv.org/abs/2006.02948v2
- Date: Mon, 19 Oct 2020 17:52:43 GMT
- Title: Low-Rank Generalized Linear Bandit Problems
- Authors: Yangyi Lu, Amirhossein Meisami, Ambuj Tewari
- Abstract summary: In a low-rank linear bandit problem, the reward of an action is the inner product between the action and an unknown low-rank matrix $\Theta^*$.
We propose an algorithm based on a novel combination of online-to-confidence-set conversion~\citep{abbasi2012online} and the exponentially weighted average forecaster constructed by a covering of low-rank matrices.
- Score: 19.052901817304743
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a low-rank linear bandit problem, the reward of an action (represented by
a matrix of size $d_1 \times d_2$) is the inner product between the action and
an unknown low-rank matrix $\Theta^*$. We propose an algorithm based on a novel
combination of online-to-confidence-set conversion~\citep{abbasi2012online} and
the exponentially weighted average forecaster constructed by a covering of
low-rank matrices. In $T$ rounds, our algorithm achieves
$\widetilde{O}((d_1+d_2)^{3/2}\sqrt{rT})$ regret that improves upon the
standard linear bandit regret bound of $\widetilde{O}(d_1d_2\sqrt{T})$ when the
rank $r$ of $\Theta^*$ satisfies $r \ll \min\{d_1,d_2\}$. We also extend our algorithmic
approach to the generalized linear setting to get an algorithm which enjoys a
similar bound under regularity conditions on the link function. To get around
the computational intractability of covering based approaches, we propose an
efficient algorithm by extending the "Explore-Subspace-Then-Refine" algorithm
of~\citet{jun2019bilinear}. Our efficient algorithm achieves
$\widetilde{O}((d_1+d_2)^{3/2}\sqrt{rT})$ regret under a mild condition on the
action set $\mathcal{X}$ and the $r$-th singular value of $\Theta^*$. Our upper
bounds match the conjectured lower bound of \cite{jun2019bilinear} for a
subclass of low-rank linear bandit problems. Further, we show that existing
lower bounds for the sparse linear bandit problem strongly suggest that our
regret bounds are unimprovable. To complement our theoretical contributions, we
also conduct experiments to demonstrate that our algorithm can greatly
outperform the standard linear bandit approach when
$\Theta^*$ is low-rank.
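To make the setting concrete, below is a minimal Python sketch of the low-rank reward model together with a simplified two-stage strategy in the spirit of the "Explore-Subspace-Then-Refine" approach described above. The Gaussian noise, isotropic exploration scheme, LinUCB-style refinement, and all constants are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 8, 6, 2
sigma = 0.1                      # noise level (Gaussian noise is an assumption)

# Unknown rank-r parameter Theta*: d1*d2 entries but only about
# (d1 + d2) * r degrees of freedom, which is what the improved
# (d1 + d2)^{3/2} sqrt(r T) regret bound exploits.
U, _ = np.linalg.qr(rng.normal(size=(d1, r)))
V, _ = np.linalg.qr(rng.normal(size=(d2, r)))
Theta = U @ np.diag([1.0, 0.5]) @ V.T

def pull(X):
    """Reward of action X: trace inner product <X, Theta*> plus noise."""
    return float(np.sum(X * Theta)) + sigma * rng.normal()

# Stage 1 (explore subspace): with i.i.d. standard Gaussian actions,
# E[y * X] = Theta*, so averaging y * X estimates Theta* up to noise.
T1 = 2000
M = np.zeros((d1, d2))
for _ in range(T1):
    X = rng.normal(size=(d1, d2))
    M += pull(X) * X
M /= T1

# Top-r left/right singular subspaces of the moment estimate.
Uh, _, Vh_t = np.linalg.svd(M)
Uh, Vh = Uh[:, :r], Vh_t[:r, :].T

# Stage 2 (refine): run a low-dimensional linear bandit on the projected
# features phi(X) = vec(Uh^T X Vh) in R^{r^2}. A full treatment would also
# track the misaligned components; this sketch ignores them.
def phi(X):
    return (Uh.T @ X @ Vh).ravel()

A = np.eye(r * r)                # ridge statistics for LinUCB
b = np.zeros(r * r)
beta = 0.5                       # exploration bonus (an arbitrary guess)
for t in range(3000):
    cands = [rng.normal(size=(d1, d2)) for _ in range(20)]
    theta_hat = np.linalg.solve(A, b)
    Ainv = np.linalg.inv(A)
    scores = [f @ theta_hat + beta * np.sqrt(f @ Ainv @ f)
              for f in map(phi, cands)]
    X = cands[int(np.argmax(scores))]
    f, y = phi(X), pull(X)
    A += np.outer(f, f)
    b += y * f

core = np.linalg.solve(A, b).reshape(r, r)
print("refit error:", np.linalg.norm(Theta - Uh @ core @ Vh.T))
```

After the subspace estimate, the refinement works in roughly $(d_1+d_2)r$ dimensions instead of $d_1d_2$, which is the source of the improvement over the $\widetilde{O}(d_1d_2\sqrt{T})$ standard linear bandit bound.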
Related papers
- Low-rank Matrix Bandits with Heavy-tailed Rewards [55.03293214439741]
We study the problem of low-rank matrix bandits with heavy-tailed rewards (LowHTR).
By utilizing truncation of the observed payoffs and dynamic exploration, we propose a novel algorithm called LOTUS.
arXiv Detail & Related papers (2024-04-26T21:54:31Z)
- Low-Rank Bandits via Tight Two-to-Infinity Singular Subspace Recovery [45.601316850669406]
We present efficient algorithms for policy evaluation, best policy identification and regret minimization.
For policy evaluation and best policy identification, we show that our algorithms are nearly minimax optimal.
All the proposed algorithms consist of two phases: they first leverage spectral methods to estimate the left and right singular subspaces of the low-rank reward matrix.
arXiv Detail & Related papers (2024-02-24T06:36:08Z)
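The "two-to-infinity" norm in this title measures the largest row-wise error of a subspace estimate, giving entrywise rather than average control. A minimal illustration of the spectral step and of computing this norm (dimensions, noise model, and constants below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 50, 3

# A rank-r "reward matrix" plus noise, standing in for an empirical estimate.
U, _ = np.linalg.qr(rng.normal(size=(d, r)))
M = U @ rng.normal(size=(r, d))
Mhat = M + 0.05 * rng.normal(size=(d, d))

# Spectral step: estimated left singular subspace of the noisy matrix.
Uh = np.linalg.svd(Mhat)[0][:, :r]

# Align Uh with U (subspaces are only identified up to rotation).
W, _, Zt = np.linalg.svd(Uh.T @ U)
E = Uh @ (W @ Zt) - U

two_to_inf = np.max(np.linalg.norm(E, axis=1))  # ||E||_{2 -> infinity}
print("row-wise error:", two_to_inf,
      "average error:", np.linalg.norm(E) / np.sqrt(d))
```

- Efficient Frameworks for Generalized Low-Rank Matrix Bandit Problems [61.85150061213987]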
We study the generalized low-rank matrix bandit problem, proposed in \citet{lu2021low} under the Generalized Linear Model (GLM) framework.
To overcome the computational infeasibility and theoretical limitations of existing algorithms, we first propose the G-ESTT framework.
We show that G-ESTT can achieve the $\tilde{O}(\sqrt{(d_1+d_2)^{3/2}Mr^{3/2}T})$ regret bound, while G-ESTS can achieve the $\tilde{O}(\ldots)$
arXiv Detail & Related papers (2024-01-14T14:14:19Z)
- Best-of-Both-Worlds Algorithms for Linear Contextual Bandits [11.94312915280916]
We study best-of-both-worlds algorithms for $K$-armed linear contextual bandits.
Our algorithms deliver near-optimal regret bounds in both the stochastic and adversarial regimes.
arXiv Detail & Related papers (2023-12-24T08:27:30Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
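As a generic illustration of how sketching can speed up ridge regression, here is plain sketch-and-solve with a Gaussian sketch; this is not the paper's iterative algorithm, and all problem sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, lam = 5000, 50, 1.0
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

# Exact ridge solution for reference.
x_star = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# Sketch-and-solve: compress the n rows down to m << n with a Gaussian
# sketch S, then solve the much smaller sketched ridge problem.
m = 400
S = rng.normal(size=(m, n)) / np.sqrt(m)
SA, Sb = S @ A, S @ b
x_sk = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

print("relative error:",
      np.linalg.norm(x_sk - x_star) / np.linalg.norm(x_star))
```

- Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits [99.86860277006318]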
We consider the problem of combining and learning over a set of adversarial bandit algorithms with the goal of adaptively tracking the best one on the fly.
The CORRAL algorithm of Agarwal et al. achieves this goal with a regret overhead of order $\widetilde{O}(\sqrt{MT})$, where $M$ is the number of base algorithms and $T$ is the time horizon. This polynomial dependence on $M$ is prohibitive when the number of base algorithms is large.
Motivated by this issue, we propose a new recipe to corral a larger band of bandit algorithms whose regret overhead has only logarithmic dependence on $M$ as long as…
arXiv Detail & Related papers (2022-02-12T21:55:44Z)
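For intuition about corralling, here is a minimal exponential-weights master (EXP3-style, with importance-weighted reward estimates) choosing among stand-in base algorithms. This is a simplified illustration, not the CORRAL update, which additionally uses a log-barrier mirror-descent step and adaptive learning rates; all constants are guesses.

```python
import numpy as np

rng = np.random.default_rng(2)
M, T = 4, 5000               # number of base algorithms, horizon
eta, gamma = 0.05, 0.01      # learning rate and exploration floor (guesses)

# Stand-in base algorithms: each yields Bernoulli rewards with a fixed mean.
means = np.array([0.2, 0.4, 0.6, 0.8])

S = np.zeros(M)              # cumulative importance-weighted reward estimates
for t in range(T):
    z = eta * S
    p = np.exp(z - z.max())  # exponentially weighted average forecaster
    p /= p.sum()
    p = (1 - gamma) * p + gamma / M
    i = rng.choice(M, p=p)
    y = float(rng.random() < means[i])
    S[i] += y / p[i]         # unbiased estimate of base algorithm i's reward

print("final play probabilities:", np.round(p, 3))  # concentrates on the best
```

- Impact of Representation Learning in Linear Bandits [83.17684841392754]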
We study how representation learning can improve the efficiency of bandit problems.
We present a new algorithm which achieves $\widetilde{O}(T\sqrt{kN} + \sqrt{dkNT})$ regret, where $N$ is the number of rounds we play for each bandit.
arXiv Detail & Related papers (2020-10-13T16:35:30Z)
- Stochastic Bandits with Linear Constraints [69.757694218456]
We study a constrained contextual linear bandit setting, where the goal of the agent is to produce a sequence of policies.
We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB).
arXiv Detail & Related papers (2020-06-17T22:32:19Z)
- Problem-Complexity Adaptive Model Selection for Stochastic Linear Bandits [20.207989166682832]
We consider the problem of model selection for two popular linear bandit settings.
In the first setting, we consider the $K$ armed mixture bandits, where the mean reward of arm $i \in [K]$ is $\mu_i + \langle \alpha_{i,t}, \theta^* \rangle$.
We show that our Adaptive Linear Bandit (ALB) algorithm achieves regret scaling of $O(\|\theta^*\|\sqrt{T})$, where $\|\theta^*\|$ is a priori unknown.
arXiv Detail & Related papers (2020-06-04T02:19:00Z)
- Efficient and Robust Algorithms for Adversarial Linear Contextual Bandits [13.32560004325655]
We consider an adversarial variant of the classic $K$-armed linear contextual bandit problem.
Under the assumption that the $d$-dimensional contexts are generated i.i.d. at random, we develop efficient algorithms based on the classic Exp3 algorithm.
arXiv Detail & Related papers (2020-02-01T22:49:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.