Pure Exploration in Kernel and Neural Bandits
- URL: http://arxiv.org/abs/2106.12034v1
- Date: Tue, 22 Jun 2021 19:51:59 GMT
- Title: Pure Exploration in Kernel and Neural Bandits
- Authors: Yinglun Zhu, Dongruo Zhou, Ruoxi Jiang, Quanquan Gu, Rebecca Willett,
Robert Nowak
- Abstract summary: We study pure exploration in bandits, where the dimension of the feature representation can be much larger than the number of arms.
To overcome the curse of dimensionality, we propose to adaptively embed the feature representation of each arm into a lower-dimensional space.
- Score: 90.23165420559664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study pure exploration in bandits, where the dimension of the feature
representation can be much larger than the number of arms. To overcome the
curse of dimensionality, we propose to adaptively embed the feature
representation of each arm into a lower-dimensional space and carefully deal
with the induced model misspecifications. Our approach is conceptually very
different from existing works that can either only handle low-dimensional
linear bandits or passively deal with model misspecifications. We showcase the
application of our approach to two pure exploration settings that were
previously under-studied: (1) the reward function belongs to a possibly
infinite-dimensional Reproducing Kernel Hilbert Space, and (2) the reward
function is nonlinear and can be approximated by neural networks. Our main
results provide sample complexity guarantees that only depend on the effective
dimension of the feature spaces in the kernel or neural representations.
Extensive experiments conducted on both synthetic and real-world datasets
demonstrate the efficacy of our methods.
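To make the embedding idea concrete, below is a minimal sketch of adaptive dimensionality reduction for a finite arm set, assuming an RBF kernel with a median-heuristic bandwidth and a simple eigenvalue-truncation rule; it illustrates only the embedding step, not the authors' full algorithm, which also manages the induced misspecification during elimination.
```python
import numpy as np

def embed_arms(X, tol=1e-3):
    """Embed n arms into R^k, where k counts the kernel eigenvalues above
    a relative threshold -- a simple proxy for the effective dimension."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    h2 = np.median(D2[D2 > 0])                           # median-heuristic bandwidth
    K = np.exp(-D2 / (2 * h2))                           # n x n RBF Gram matrix
    evals, evecs = np.linalg.eigh(K)                     # ascending eigenvalues
    keep = evals > tol * evals.max()                     # drop near-zero directions
    lam, U = evals[keep], evecs[:, keep]
    # Rows are low-dimensional arm features; their inner products
    # approximate the original kernel values.
    return U * np.sqrt(lam)

rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 3))           # arms driven by 3 latent factors...
X = Z @ rng.normal(size=(3, 1000))     # ...observed in 1000 ambient dimensions
Phi = embed_arms(X)
print(Phi.shape)                       # (50, k) with k at most 50, far below 1000
```
Truncating the spectrum introduces a controlled misspecification; the paper's point is to handle that misspecification actively rather than assume it away.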
Related papers
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
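As a rough illustration of the LLA itself (not the paper's Bayesian-optimization pipeline), the sketch below linearizes a toy network around its weights and uses the Gauss-Newton Laplace posterior for predictive variances; the finite-difference Jacobian, isotropic prior, and tiny MLP are all assumptions made for brevity.
```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(params, x, h=8):
    # Tiny 1-hidden-layer network; params is a flat vector of length 3*h + 1.
    W1 = params[:h].reshape(h, 1); b1 = params[h:2*h]
    w2 = params[2*h:3*h];          b2 = params[3*h]
    return np.tanh(x @ W1.T + b1) @ w2 + b2

def jacobian(params, x, eps=1e-5):
    # Finite-difference Jacobian of the network output w.r.t. the parameters.
    J = np.zeros((x.shape[0], params.size))
    for i in range(params.size):
        d = np.zeros_like(params); d[i] = eps
        J[:, i] = (mlp(params + d, x) - mlp(params - d, x)) / (2 * eps)
    return J

params = rng.normal(scale=0.5, size=3 * 8 + 1)  # stand-in for trained weights
X_train = rng.uniform(-3, 3, size=(40, 1))
noise_var, prior_prec = 0.1, 1.0

# Laplace posterior precision from the Gauss-Newton approximation.
J = jacobian(params, X_train)
H = J.T @ J / noise_var + prior_prec * np.eye(params.size)

# Predictive variance at new points: diag(J* H^{-1} J*^T) plus noise.
X_test = np.linspace(-5, 5, 5).reshape(-1, 1)
Jt = jacobian(params, X_test)
var = np.einsum('ij,jk,ik->i', Jt, np.linalg.inv(H), Jt) + noise_var
print(var)  # variance is typically larger far from the training interval
```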
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Because samples have complex non-linear characteristics, the objective of these activation functions is to project samples from their original feature space into a linearly separable feature space.
This observation sparks our interest in exploring whether all features need to be transformed by all the non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z)
- Learning Low-Dimensional Nonlinear Structures from High-Dimensional Noisy Data: An Integral Operator Approach [5.975670441166475]
We propose a kernel-spectral embedding algorithm for learning low-dimensional nonlinear structures from high-dimensional and noisy observations.
The algorithm employs an adaptive bandwidth selection procedure which does not rely on prior knowledge of the underlying manifold.
The obtained low-dimensional embeddings can be further utilized for downstream purposes such as data visualization, clustering and prediction.
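A bare-bones sketch of a kernel-spectral embedding follows; note the median-distance bandwidth is a common stand-in used here for illustration, not the adaptive selection procedure proposed in the paper.
```python
import numpy as np

def kernel_spectral_embedding(X, dim=2):
    # Pairwise distances among noisy high-dimensional observations.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Median heuristic as a data-driven bandwidth (illustrative only).
    h = np.median(D[D > 0])
    K = np.exp(-D ** 2 / (2 * h ** 2))
    evals, evecs = np.linalg.eigh(K)
    # Leading eigenvectors supply the low-dimensional coordinates.
    return evecs[:, -dim:] * np.sqrt(evals[-dim:])

rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 200)
circle = np.c_[np.cos(t), np.sin(t)]             # 1D manifold in the plane
X = np.pad(circle, ((0, 0), (0, 48))) + 0.05 * rng.normal(size=(200, 50))
Y = kernel_spectral_embedding(X, dim=2)
print(Y.shape)                                   # (200, 2)
```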
arXiv Detail & Related papers (2022-02-28T22:46:34Z)
- Linear approximability of two-layer neural networks: A comprehensive analysis based on spectral decay [4.042159113348107]
We first consider the case of a single neuron and show that the linear approximability, quantified by the Kolmogorov width, is controlled by the eigenvalue decay of an associated kernel.
We show that similar results also hold for two-layer neural networks.
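One way to see the eigenvalue-decay mechanism numerically (a toy experiment, not the paper's analysis): estimate the kernel associated with random ReLU neurons via Monte Carlo features and inspect how much spectral mass the leading eigendirections carry.
```python
import numpy as np

rng = np.random.default_rng(3)
n, d, width = 300, 10, 5000

# Monte Carlo estimate of the kernel induced by random ReLU neurons:
# k(x, x') = E_w[relu(w.x) relu(w.x')].
X = rng.normal(size=(n, d)) / np.sqrt(d)
W = rng.normal(size=(width, d))
F = np.maximum(W @ X.T, 0.0)          # width x n random-feature matrix
K = F.T @ F / width                   # n x n Gram matrix

evals = np.linalg.eigvalsh(K)[::-1]   # descending spectrum
ratio = np.cumsum(evals) / evals.sum()
# A fast-decaying spectrum means few directions carry most of the energy.
print(int(np.searchsorted(ratio, 0.99)) + 1, "of", n, "directions reach 99%")
```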
arXiv Detail & Related papers (2021-08-10T23:30:29Z)
- Provable Model-based Nonlinear Bandit and Reinforcement Learning: Shelve Optimism, Embrace Virtual Curvature [61.22680308681648]
We show that global convergence is statistically intractable even for a one-layer neural net bandit with a deterministic reward.
For both nonlinear bandits and RL, the paper presents a model-based algorithm, Virtual Ascent with Online Model Learner (ViOL).
arXiv Detail & Related papers (2021-02-08T12:41:56Z)
- Dynamics of two-dimensional open quantum lattice models with tensor networks [0.0]
We develop a tensor network method, based on an infinite Projected Entangled Pair Operator (iPEPO) ansatz, applicable directly in the thermodynamic limit.
We consider dissipative transverse-field quantum Ising and driven-dissipative hard-core boson models in non-mean-field limits.
Our method makes it possible to study regimes that are accessible to current experiments but lie well beyond the applicability of existing techniques.
arXiv Detail & Related papers (2020-12-22T18:24:20Z)
- Multi-fidelity data fusion for the approximation of scalar functions with low intrinsic dimensionality through active subspaces [0.0]
We present a multi-fidelity approach involving active subspaces and we test it on two different high-dimensional benchmarks.
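For readers new to active subspaces, the single-fidelity core works as follows: eigendecompose the average outer product of gradients and keep the dominant directions. The toy function and sample sizes below are illustrative, and the paper's multi-fidelity fusion step is not reproduced.
```python
import numpy as np

rng = np.random.default_rng(4)

def f_grad(x):
    # Toy function f(x) = sin(a.x) with known gradient a * cos(a.x).
    a = np.arange(1.0, x.size + 1.0) / x.size
    return a * np.cos(a @ x)

d, m = 20, 500
samples = rng.uniform(-1, 1, size=(m, d))
grads = np.array([f_grad(x) for x in samples])

# C = E[grad f grad f^T]; its top eigenvectors span the active subspace.
C = grads.T @ grads / m
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]
print("leading eigenvalues:", np.round(evals[:4], 4))
W1 = evecs[:, :1]           # here one active direction suffices: f depends on a.x only
reduced = samples @ W1      # low-dimensional inputs for a cheap surrogate
print(reduced.shape)        # (500, 1)
```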
arXiv Detail & Related papers (2020-10-16T12:35:49Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure into the coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Sample complexity and effective dimension for regression on manifolds [13.774258153124205]
We consider the theory of regression on a manifold using reproducing kernel Hilbert space methods.
We show that certain spaces of smooth functions on a manifold are effectively finite-dimensional, with a complexity that scales according to the manifold dimension.
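A common formalization of this notion is the regularized trace d_eff(lambda) = tr(K (K + lambda I)^{-1}); the sketch below evaluates it for data lying near a one-dimensional curve in a higher-dimensional ambient space (the curve, noise level, and RBF bandwidth are illustrative choices, not the paper's construction). A closely related effective dimension drives the sample complexity bounds of the main paper above.
```python
import numpy as np

def effective_dimension(K, lam):
    # d_eff(lam) = trace(K (K + lam I)^{-1}).
    n = K.shape[0]
    return np.trace(K @ np.linalg.inv(K + lam * np.eye(n)))

rng = np.random.default_rng(5)
n = 200
t = rng.uniform(0, 4 * np.pi, n)
# 1D spiral embedded in 10 ambient dimensions, plus small noise.
X = np.zeros((n, 10))
X[:, 0], X[:, 1] = t * np.cos(t), t * np.sin(t)
X += 0.01 * rng.normal(size=X.shape)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * np.median(d2[d2 > 0])))
print(effective_dimension(K, lam=1.0))   # typically far smaller than n = 200
```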
arXiv Detail & Related papers (2020-06-13T14:09:55Z)