Symmetric Linear Bandits with Hidden Symmetry
- URL: http://arxiv.org/abs/2405.13899v2
- Date: Wed, 30 Oct 2024 21:26:19 GMT
- Title: Symmetric Linear Bandits with Hidden Symmetry
- Authors: Nam Phuong Tran, The Anh Ta, Debmalya Mandal, Long Tran-Thanh
- Abstract summary: We study high-dimensional symmetric linear bandits where the symmetry is hidden from the learner.
We provide a method based on model selection within the collection of low-dimensional subspaces.
- Abstract: High-dimensional linear bandits with low-dimensional structure have received considerable attention in recent studies due to their practical significance. The most common structure in the literature is sparsity. However, it may not be available in practice. Symmetry, where the reward is invariant under certain groups of transformations on the set of arms, is another important inductive bias in the high-dimensional case that covers many standard structures, including sparsity. In this work, we study high-dimensional symmetric linear bandits where the symmetry is hidden from the learner, and the correct symmetry needs to be learned in an online setting. We examine the structure of a collection of hidden symmetries and provide a method based on model selection within the collection of low-dimensional subspaces. Our algorithm achieves a regret bound of $O(d_0^{2/3} T^{2/3} \log(d))$, where $d$ is the ambient dimension, which is potentially very large, and $d_0$ is the dimension of the true low-dimensional subspace such that $d_0 \ll d$. With an extra assumption on well-separated models, we can further improve the regret to $O(d_0\sqrt{T\log(d)})$.
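The model-selection idea described in the abstract can be illustrated with a toy explore-then-commit sketch. This is not the paper's actual algorithm: the candidate subspaces here are random (rather than derived from symmetry groups), and the arm set, noise level, and $T^{2/3}$ exploration budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, d0, T = 20, 4, 3000          # ambient dim, subspace dim, horizon (illustrative)
K = 50                          # number of arms (illustrative)

# Hypothetical setup: the true parameter lies in an unknown d0-dimensional
# subspace, standing in for the symmetry-induced subspace in the paper.
B_true = np.linalg.qr(rng.standard_normal((d, d0)))[0]
theta = B_true @ rng.standard_normal(d0)
theta /= np.linalg.norm(theta)

# Candidate subspaces the learner entertains; one of them is correct.
candidates = [np.linalg.qr(rng.standard_normal((d, d0)))[0] for _ in range(5)]
candidates.append(B_true)

arms = rng.standard_normal((K, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)

def pull(x):
    """Noisy linear reward (Gaussian noise, assumed scale)."""
    return x @ theta + 0.1 * rng.standard_normal()

# Phase 1 -- uniform exploration for n0 ~ T^(2/3) rounds.
n0 = int(T ** (2 / 3))
X = arms[rng.integers(K, size=n0)]
y = np.array([pull(x) for x in X])

# Phase 2 -- model selection: ridge fit inside each candidate subspace;
# keep the subspace with the smallest in-sample residual.
best_err, theta_hat = np.inf, None
for B in candidates:
    Z = X @ B                   # project features into the subspace
    w = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(d0), Z.T @ y)
    err = np.mean((Z @ w - y) ** 2)
    if err < best_err:
        best_err, theta_hat = err, B @ w

# Phase 3 -- commit to the greedy arm under the selected model.
best_arm = arms[np.argmax(arms @ theta_hat)]
```

Because a wrong $d_0$-dimensional subspace leaves most of the signal unexplained, its residual stays well above the noise floor, so the correct subspace wins the selection with high probability; the $T^{2/3}$ split between exploration and commitment mirrors the shape of the paper's first regret bound.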
Related papers
- Linear Convergence of Diffusion Models Under the Manifold Hypothesis
We show that the number of steps to converge in Kullback-Leibler (KL) divergence is linear (up to logarithmic terms) in the intrinsic dimension.
We also show that this linear dependency is sharp.
arXiv Detail & Related papers (2024-10-11T17:58:30Z) - Equivariant Symmetry Breaking Sets
Equivariant neural networks (ENNs) have been shown to be extremely effective in applications involving underlying symmetries.
We propose a novel symmetry breaking framework that is fully equivariant and is the first which fully addresses spontaneous symmetry breaking.
arXiv Detail & Related papers (2024-02-05T02:35:11Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function
Approximation and Statistical Theories
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion -- the effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras
from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - Linear programming with unitary-equivariant constraints
Unitary equivariance is a natural symmetry that occurs in many contexts in physics and mathematics.
We show that under additional symmetry assumptions, this problem reduces to a linear program that can be solved in time that does not scale in $d$.
We also outline a possible route for extending our method to general unitary-equivariant semidefinite programs.
arXiv Detail & Related papers (2022-07-12T17:37:04Z) - Machine learning a manifold
We propose a simple method to identify a continuous Lie algebra symmetry in a dataset through regression by an artificial neural network.
Our proposal takes advantage of the $\mathcal{O}(\epsilon^2)$ scaling of the output variable under infinitesimal symmetry transformations on the input variables.
arXiv Detail & Related papers (2021-12-14T19:00:00Z) - High-Dimensional Sparse Linear Bandits
We derive a novel $\Omega(n^{2/3})$ dimension-free minimax regret lower bound for sparse linear bandits in the data-poor regime.
We also prove a dimension-free $O(\sqrt{n})$ regret upper bound under an additional assumption on the magnitude of the signal for relevant features.
arXiv Detail & Related papers (2020-11-08T16:48:11Z) - Online Sparse Reinforcement Learning
We investigate the hardness of online reinforcement learning in fixed-horizon, sparse linear Markov decision processes (MDPs).
We show that linear regret is generally unavoidable in this case, even if there exists a policy that collects well-conditioned data.
This shows that in the large-action setting, the difficulty of learning can be attributed to the difficulty of finding a good exploratory policy.
arXiv Detail & Related papers (2020-11-08T16:47:42Z) - Nearly Dimension-Independent Sparse Linear Bandit over Small Action
Spaces via Best Subset Selection
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimating the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z) - A deep network construction that adapts to intrinsic dimensionality
beyond the domain
We study the approximation of two-layer compositions $f(x) = g(\phi(x))$ via deep networks with ReLU activation.
We focus on two intuitive and practically relevant choices for $\phi$: the projection onto a low-dimensional embedded submanifold and a distance to a collection of low-dimensional sets.
arXiv Detail & Related papers (2020-08-06T09:50:29Z)