On High-dimensional and Low-rank Tensor Bandits
- URL: http://arxiv.org/abs/2305.03884v1
- Date: Sat, 6 May 2023 00:43:36 GMT
- Title: On High-dimensional and Low-rank Tensor Bandits
- Authors: Chengshuai Shi, Cong Shen, Nicholas D. Sidiropoulos
- Abstract summary: This work studies a general tensor bandits model, where actions and system parameters are represented by tensors as opposed to vectors.
A novel bandit algorithm, coined TOFU (Tensor Optimism in the Face of Uncertainty), is developed.
Theoretical analyses show that TOFU improves the best-known regret upper bound by a multiplicative factor that grows exponentially in the system order.
- Score: 53.0829344775769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing studies on linear bandits focus on the one-dimensional
characterization of the overall system. While being representative, this
formulation may fail to model applications with high-dimensional but favorable
structures, such as the low-rank tensor representation for recommender systems.
To address this limitation, this work studies a general tensor bandits model,
where actions and system parameters are represented by tensors as opposed to
vectors, and we particularly focus on the case that the unknown system tensor
is low-rank. A novel bandit algorithm, coined TOFU (Tensor Optimism in the Face
of Uncertainty), is developed. TOFU first leverages flexible tensor regression
techniques to estimate low-dimensional subspaces associated with the system
tensor. These estimates are then utilized to convert the original problem to a
new one with norm constraints on its system parameters. Lastly, a
norm-constrained bandit subroutine is adopted by TOFU, which utilizes these
constraints to avoid exploring the entire high-dimensional parameter space.
Theoretical analyses show that TOFU improves the best-known regret upper bound
by a multiplicative factor that grows exponentially in the system order. A
novel performance lower bound is also established, which further corroborates
the efficiency of TOFU.
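The three-stage pipeline in the abstract can be sketched on a toy order-2 (matrix) instance. Everything below is an illustrative simplification, not the paper's exact method: the sizes are hypothetical, the tensor-regression step is replaced by plain least squares, and the norm-constrained subroutine is replaced by a standard UCB on the truncated projected coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy order-2 instance (a matrix system); all sizes are hypothetical.
d, r, n_explore = 8, 2, 400

# Unknown low-rank system parameter Theta (ground truth, used only to simulate rewards).
U0, _ = np.linalg.qr(rng.standard_normal((d, r)))
V0, _ = np.linalg.qr(rng.standard_normal((d, r)))
Theta = U0 @ np.diag([1.0, 0.5]) @ V0.T

def pull(X):
    """Noisy linear reward <Theta, X> + eps."""
    return float(np.sum(Theta * X)) + 0.01 * rng.standard_normal()

# Stage 1: explore with random actions, estimate Theta by (here) least-squares
# regression, then take an SVD (HOSVD for higher orders) to get the subspaces.
X_explore = rng.standard_normal((n_explore, d, d)) / d
y = np.array([pull(X) for X in X_explore])
theta_hat, *_ = np.linalg.lstsq(X_explore.reshape(n_explore, -1), y, rcond=None)
Uh, _, Vh = np.linalg.svd(theta_hat.reshape(d, d))
U_hat, V_hat = Uh[:, :r], Vh[:r, :].T

# Stage 2: rotate actions into the estimated subspaces. TOFU keeps the leftover
# coordinates under small-norm constraints; this sketch simply truncates them.
def project(X):
    return (U_hat.T @ X @ V_hat).ravel()  # r*r coordinates

# Stage 3: an OFUL-style UCB subroutine on the projected low-dimensional problem.
arms = [rng.standard_normal((d, d)) / d for _ in range(30)]
Phi = np.array([project(X) for X in arms])
Vmat, b = np.eye(r * r), np.zeros(r * r)
for _ in range(200):
    w = np.linalg.solve(Vmat, b)
    widths = np.sqrt(np.einsum("ij,jk,ik->i", Phi, np.linalg.inv(Vmat), Phi))
    k = int(np.argmax(Phi @ w + widths))
    Vmat += np.outer(Phi[k], Phi[k])
    b += Phi[k] * pull(arms[k])
```

The point of the truncation (or, in TOFU proper, the norm constraints) is that the UCB subroutine only has to explore an r*r-dimensional space instead of the full d*d one.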
Related papers
- Low-Rank Tensor Learning by Generalized Nonconvex Regularization [25.115066273660478]
We study the problem of low-rank tensor learning, where only a few samples of the underlying tensor are observed.
A family of generalized nonconvex regularizers is employed to characterize the low-rankness of the underlying tensor.
A majorization-minimization algorithm is proposed to solve the resulting nonconvex optimization problem.
arXiv Detail & Related papers (2024-10-24T03:33:20Z) - Irregular Tensor Low-Rank Representation for Hyperspectral Image Representation [71.69331824668954]
Spectral variations pose a common challenge in analyzing hyperspectral images (HSI)
Low-rank tensor representation has emerged as a robust strategy, leveraging inherent correlations within HSI data.
We propose a novel model for irregular tensor low-rank representation tailored to efficiently model irregular 3D cubes.
arXiv Detail & Related papers (2024-10-24T02:56:22Z) - Efficient Generalized Low-Rank Tensor Contextual Bandits [14.016197224603433]
We introduce a low-rank contextual bandits model in which an action is formed from three feature vectors, and thus can be represented by a tensor.
To effectively achieve the trade-off between exploration and exploitation, we introduce a novel algorithm called "Generalized Low-Rank Subspace then Refine" (G-LowTESTR)
Rigorous theoretical analysis shows that the regret bound of G-LowTESTR is superior to those in vectorization and matricization cases.
arXiv Detail & Related papers (2023-11-03T08:12:05Z) - Low-Rank Tensor Completion via Novel Sparsity-Inducing Regularizers [30.920908325825668]
To alleviate the bias of the l1-norm in the low-rank tensor completion problem, nonconvex surrogates/regularizers have been suggested.
These regularizers are applied as sparsity-inducing penalties for low-rank tensor recovery, and efficient algorithms based on the method of multipliers are proposed.
arXiv Detail & Related papers (2023-10-10T01:00:13Z) - Tight Certified Robustness via Min-Max Representations of ReLU Neural Networks [9.771011198361865]
The reliable deployment of neural networks in control systems requires rigorous robustness guarantees.
In this paper, we obtain tight robustness certificates over convex representations of ReLU neural networks.
arXiv Detail & Related papers (2023-10-07T21:07:45Z) - Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation [6.296104145657063]
Existing neural operators (NOs) do not necessarily perform well for all physics problems.
We propose a subfamily of NOs enabling an enhanced empirical approximation of the nonlinear operator mapping wave speed to solution.
Our experiments reveal certain surprises in the generalization and the relevance of introducing depth.
We conclude by proposing a hypernetwork version of the subfamily of NOs as a surrogate model for the mentioned forward operator.
arXiv Detail & Related papers (2023-01-27T03:02:12Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
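As a hedged illustration of the unfolding idea behind such networks (not REST's actual architecture), each layer of an unrolled network can be viewed as one ISTA iteration for sparse recovery, whose step size and threshold would become trainable per layer in the learned version:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(W, y, n_layers=200, step=None, theta=0.05):
    """Run n_layers ISTA iterations x <- soft(x + step * W^T (y - W x), step * theta).

    In an unrolled (LISTA-style) network, W, step, and theta would be
    trainable parameters of each layer rather than fixed constants.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(W, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(W.shape[1])
    for _ in range(n_layers):
        x = soft(x + step * W.T @ (y - W @ x), step * theta)
    return x
```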
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - A Unifying Theory of Thompson Sampling for Continuous Risk-Averse Bandits [91.3755431537592]
This paper unifies the analysis of risk-averse Thompson sampling algorithms for the multi-armed bandit problem.
Using the contraction principle in the theory of large deviations, we prove novel concentration bounds for continuous risk functionals.
We show that a wide class of risk functionals as well as "nice" functions of them satisfy the continuity condition.
arXiv Detail & Related papers (2021-08-25T17:09:01Z) - Regularized OFU: an Efficient UCB Estimator for Non-linear Contextual Bandit [90.0208037317206]
Balancing exploration and exploitation (EE) is a fundamental problem in contextual bandits.
We propose a novel OFU algorithm named Regularized OFU (ROFU).
arXiv Detail & Related papers (2021-06-29T07:28:15Z) - Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
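A minimal sketch of a Gaussian-mixture-style classification loss of the kind summarized above, assuming identity covariances and omitting the classification margin for brevity; the function name and the regularization weight `lam` are illustrative choices, not the paper's:

```python
import numpy as np

def gm_loss(features, labels, means, lam=0.1):
    """Cross-entropy over Gaussian-distance logits plus a likelihood regularizer.

    features: (n, d) array, labels: (n,) int array, means: (K, d) class means.
    Each class k is modeled as a Gaussian with mean means[k] and identity
    covariance, so the logit of class k is the negative half squared distance.
    """
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(-1)  # (n, K)
    logits = -0.5 * d2
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(labels))
    ce = -logp[idx, labels].mean()        # classification term
    lik = 0.5 * d2[idx, labels].mean()    # pulls features toward their class mean
    return ce + lam * lik
```

The likelihood term is what shapes the feature space: it penalizes features even when they are classified correctly, so each class cluster tightens around its Gaussian mean.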
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.