Variable Hyperparameterized Gaussian Kernel using Displaced Squeezed Vacuum State
- URL: http://arxiv.org/abs/2403.11560v1
- Date: Mon, 18 Mar 2024 08:25:56 GMT
- Title: Variable Hyperparameterized Gaussian Kernel using Displaced Squeezed Vacuum State
- Authors: Vivek Mehta, Utpal Roy
- Abstract summary: A multimode coherent state can generate the Gaussian kernel, but only with a constant value of the hyperparameter.
This constant hyperparameter has limited the application of the Gaussian kernel to complex learning problems.
We realize the variable hyperparameterized kernel with a multimode displaced squeezed vacuum state.
- Score: 2.1408617023874443
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There are schemes for realizing different types of kernels with quantum states of light. It is particularly interesting to realize the Gaussian kernel due to its wide applicability. A multimode coherent state can generate the Gaussian kernel, but only with a constant value of the hyperparameter. This constant hyperparameter has limited the application of the Gaussian kernel to complex learning problems. We realize the variable hyperparameterized Gaussian kernel with a multimode displaced squeezed vacuum state. The learning capacity of this kernel is tested with support vector machines on synthesized data sets as well as public benchmark data sets. We establish that the proposed variable hyperparameterized Gaussian kernel offers better accuracy than the constant-hyperparameter Gaussian kernel.
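The paper's kernel is prepared photonically, but the advantage it claims can be illustrated classically. Below is a minimal sketch (not the paper's implementation) comparing an SVM whose Gaussian-kernel hyperparameter is fixed in advance, as with the coherent-state kernel, against one where it is tuned per problem, as the displaced-squeezed-vacuum construction allows; the make_moons data set and all parameter values are illustrative choices.

```python
# Minimal classical sketch of the constant-vs-variable hyperparameter
# comparison (illustrative; not the paper's photonic realization).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Constant hyperparameter: gamma fixed in advance.
fixed = SVC(kernel="rbf", gamma=1.0).fit(X_tr, y_tr)

# Variable hyperparameter: gamma selected per problem by cross-validation.
grid = GridSearchCV(SVC(kernel="rbf"), {"gamma": np.logspace(-2, 2, 9)}, cv=5)
grid.fit(X_tr, y_tr)

print("fixed gamma accuracy:", fixed.score(X_te, y_te))
print("tuned gamma accuracy:", grid.score(X_te, y_te))
```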
Related papers
- New random projections for isotropic kernels using stable spectral distributions [0.0]
We decompose spectral kernel distributions as a scale mixture of $\alpha$-stable random vectors.
Results have broad applications for support vector machines, kernel ridge regression, and other kernel-based machine learning techniques.
arXiv Detail & Related papers (2024-11-05T03:28:01Z)
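For context on the random-projection idea above: a minimal random-Fourier-feature sketch for the Gaussian kernel, whose spectral distribution is the Gaussian (alpha = 2) special case of the stable laws that paper generalizes; the function name and all sizes here are illustrative.

```python
# Random Fourier features: sample the kernel's spectral distribution
# (Gaussian here, the alpha = 2 stable case) to approximate the kernel.
import numpy as np

def rff_features(X, n_features, sigma=1.0, seed=None):
    """Map X (n, d) to Z so that Z @ Z.T ~= exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(0).normal(size=(5, 3))
Z = rff_features(X, n_features=20000, seed=1)
exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
print(np.abs(Z @ Z.T - exact).max())  # small, shrinks as n_features grows
```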
- Wiener Chaos in Kernel Regression: Towards Untangling Aleatoric and Epistemic Uncertainty [0.0]
We generalize the setting and consider kernel ridge regression with additive i.i.d. non-Gaussian measurement noise.
We show that our approach allows us to distinguish the uncertainty that stems from the noise in the data samples from the total uncertainty encoded in the GP posterior distribution.
arXiv Detail & Related papers (2023-12-12T16:02:35Z)
- The Schrödinger Bridge between Gaussian Measures has a Closed Form [101.79851806388699]
We focus on the dynamic formulation of optimal transport (OT), also known as the Schrödinger bridge (SB) problem.
In this paper, we provide closed-form expressions for SBs between Gaussian measures.
arXiv Detail & Related papers (2022-02-11T15:59:01Z)
- Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z)
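For contrast with the robust bound above, here is a sketch of the vanilla baseline: the familiar GP bound |f(x) - mu(x)| <= beta * sigma(x) with hyperparameters held fixed, which is exactly the assumption that paper removes; beta = 2 and the data below are illustrative choices, not the paper's derived constants.

```python
# Vanilla GP uniform error bound with *known* hyperparameters (the
# baseline; the paper handles the unknown-hyperparameter case).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x).ravel()
rng = np.random.default_rng(0)
X_tr = rng.uniform(-2, 2, size=(30, 1))
y_tr = f(X_tr) + 0.05 * rng.normal(size=30)

# optimizer=None keeps the hyperparameters fixed at their assumed values.
gp = GaussianProcessRegressor(RBF(length_scale=0.5), alpha=0.05**2,
                              optimizer=None).fit(X_tr, y_tr)

X_te = np.linspace(-2, 2, 200).reshape(-1, 1)
mu, sigma = gp.predict(X_te, return_std=True)
beta = 2.0  # illustrative scaling; the paper derives rigorous values
print("fraction inside bound:", np.mean(np.abs(f(X_te) - mu) <= beta * sigma))
```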
- Hida-Matérn Kernel [8.594140167290098]
We present the class of Hida-Matérn kernels, which is the canonical family of covariance functions over the entire space of stationary Gauss-Markov Processes.
We show how to represent such processes as state space models using only the kernel and its derivatives.
We also show how exploiting special properties of the state space representation enables improved numerical stability in addition to further reductions of computational complexity.
arXiv Detail & Related papers (2021-07-15T03:25:10Z)
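The state-space correspondence above is concrete already for the simplest family member: the Matérn-1/2 kernel is the covariance of an Ornstein-Uhlenbeck process, so a first-order recursion reproduces it exactly. A sketch of that special case (the paper treats the full Hida-Matérn class):

```python
# Matern-1/2 kernel k(t, t') = s2 * exp(-|t - t'| / ell) as a state-space
# model: a first-order (Ornstein-Uhlenbeck) recursion, enabling O(n)
# inference in place of an O(n^3) kernel solve.
import numpy as np

s2, ell, dt, n = 1.0, 0.7, 0.1, 50
A = np.exp(-dt / ell)   # transition over one time step dt
Q = s2 * (1.0 - A**2)   # process-noise variance keeping Var(x_k) = s2

# The state-space covariance Cov(x_0, x_k) = s2 * A**k matches the kernel:
lags = np.arange(n) * dt
print(np.allclose(s2 * A ** np.arange(n), s2 * np.exp(-lags / ell)))  # True

# Sampling the GP is then a cheap recursion rather than a Cholesky draw.
rng = np.random.default_rng(0)
x = np.empty(n)
x[0] = rng.normal(scale=np.sqrt(s2))
for k in range(1, n):
    x[k] = A * x[k - 1] + rng.normal(scale=np.sqrt(Q))
```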
- A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z)
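As a companion sketch, kernel mean embeddings in their most common empirical form: the squared maximum mean discrepancy (MMD) is the RKHS distance between the empirical mean embeddings of two samples; the kernel sum-of-squares construction of the paper above is richer than this baseline.

```python
# Squared MMD: RKHS distance between empirical kernel mean embeddings.
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of ||mu_X - mu_Y||^2 in the RKHS."""
    return (gaussian_gram(X, X, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2(X, X[::-1]), mmd2(X, Y))  # ~0 for equal samples, positive for Y
```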
- Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Efficient construction of tensor-network representations of many-body Gaussian states [59.94347858883343]
We present a procedure to construct tensor-network representations of many-body Gaussian states efficiently and with a controllable error.
These states include the ground and thermal states of bosonic and fermionic quadratic Hamiltonians, which are essential in the study of quantum many-body systems.
arXiv Detail & Related papers (2020-08-12T11:30:23Z)
- Strong Uniform Consistency with Rates for Kernel Density Estimators with General Kernels on Manifolds [11.927892660941643]
We show how to handle kernel density estimation with intricate kernels not designed by the user.
The isotropic kernels considered in this paper are different from the kernels in the Vapnik-Chervonenkis class that are frequently considered in the statistics community.
arXiv Detail & Related papers (2020-07-13T14:36:06Z)
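As a flat-space point of reference for the paper above, a minimal Euclidean KDE with a plain Gaussian kernel; the manifold setting and intricate, non-user-designed kernels are precisely what this toy omits.

```python
# Baseline Euclidean kernel density estimator with a Gaussian kernel.
import numpy as np

def kde(x_eval, samples, h=0.3):
    """KDE at points x_eval from 1-D samples with bandwidth h."""
    u = (x_eval[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = rng.normal(size=2000)
grid = np.linspace(-3, 3, 7)
true_pdf = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
print(np.abs(kde(grid, samples) - true_pdf).max())  # small estimation error
```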
- The Statistical Cost of Robust Kernel Hyperparameter Tuning [20.42751031392928]
We study the statistical complexity of kernel hyperparameter tuning in the setting of active regression under adversarial noise.
We provide finite-sample guarantees for the problem, characterizing how increasing the complexity of the kernel class increases the complexity of learning kernel hyperparameters.
arXiv Detail & Related papers (2020-06-14T21:56:33Z)
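And the procedure whose statistical cost the last paper analyzes, in its everyday passive form: cross-validated bandwidth selection for Gaussian-kernel ridge regression (the paper's active-regression, adversarial-noise setting is harder); the data and grid below are illustrative.

```python
# Kernel hyperparameter tuning in its simplest form: cross-validating the
# Gaussian-kernel bandwidth (gamma) for kernel ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sinc(X).ravel() + 0.1 * rng.normal(size=300)

search = GridSearchCV(KernelRidge(kernel="rbf", alpha=1e-2),
                      {"gamma": np.logspace(-2, 2, 9)}, cv=5)
search.fit(X, y)
print("selected bandwidth parameter gamma:", search.best_params_["gamma"])
```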
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.