Kernel Learning by quantum annealer
- URL: http://arxiv.org/abs/2304.10144v1
- Date: Thu, 20 Apr 2023 08:08:03 GMT
- Title: Kernel Learning by quantum annealer
- Authors: Yasushi Hasegawa, Hiroki Oshiyama and Masayuki Ohzeki
- Abstract summary: We propose an application of the Boltzmann machine to the kernel matrix used in various machine-learning techniques.
We show that it is possible to create a spectral distribution that would not be feasible with a Gaussian distribution.
- Score: 0.966840768820136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Boltzmann machine is one of the various applications of a quantum
annealer. We propose an application of the Boltzmann machine to the kernel
matrix used in various machine-learning techniques. We focus on the fact that
shift-invariant kernel functions can be expressed as an expectation over a
spectral distribution obtained via the Fourier transform. Using this
transformation, the random Fourier feature (RFF) method samples frequencies and
approximates the kernel function. Furthermore, in this paper we propose a
method to obtain a spectral distribution suited to the data using a
Boltzmann machine. As a result, we show that the prediction accuracy is
comparable to that of the method using the Gaussian distribution. We also show
that it is possible to create a spectral distribution that would not be
feasible with a Gaussian distribution.
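For illustration, below is a minimal NumPy sketch of the standard RFF construction the abstract refers to (not the paper's quantum-annealer method): by Bochner's theorem a shift-invariant kernel can be written as $k(x - y) = \mathbb{E}_{w \sim p}[\cos(w^\top (x - y))]$, so sampling frequencies from the spectral density $p$ and building cosine features approximates the kernel. Sampling $w$ from a Gaussian recovers the RBF kernel; the paper's proposal is to learn $p$ with a Boltzmann machine instead (not shown). Function and variable names such as `rff_features` are illustrative.

```python
import numpy as np

def rff_features(X, W, b):
    """Random Fourier features: rows are sqrt(2/D) * cos(W x + b) for each sample x."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
n, d, D = 200, 5, 2000
X = rng.normal(size=(n, d))

# Baseline RFF: a Gaussian spectral distribution corresponds to the RBF kernel
# k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), whose spectral density is N(0, I / sigma^2).
sigma = 1.5
W = rng.normal(scale=1.0 / sigma, size=(D, d))   # sampled frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
Z = rff_features(X, W, b)

K_approx = Z @ Z.T                                        # RFF estimate of the Gram matrix
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / (2.0 * sigma ** 2))          # exact RBF Gram matrix
print("max abs error:", np.abs(K_approx - K_exact).max())

# The paper's proposal (not implemented here) is to replace the Gaussian sample
# for W with frequencies drawn from a spectral distribution learned by a
# Boltzmann machine on a quantum annealer, allowing non-Gaussian spectra.
```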
Related papers
- New random projections for isotropic kernels using stable spectral distributions [0.0]
We decompose spectral kernel distributions as a scale mixture of $\alpha$-stable random vectors.
Results have broad applications for support vector machines, kernel ridge regression, and other kernel-based machine learning techniques.
arXiv Detail & Related papers (2024-11-05T03:28:01Z) - Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z) - Efficient quantum loading of probability distributions through Feynman
propagators [2.56711111236449]
We present quantum algorithms for loading probability distributions using Hamiltonian simulation for one-dimensional Hamiltonians of the form $\hat{H} = \Delta + V(x)\,\mathbb{I}$.
We consider the potentials $V(x)$ for which the Feynman propagator is known to have an analytically closed form and utilize these Hamiltonians to load probability distributions into quantum states.
arXiv Detail & Related papers (2023-11-22T21:41:58Z) - Solving High Frequency and Multi-Scale PDEs with Gaussian Processes [18.190228010565367]
Physics-informed neural networks (PINNs) often struggle to solve high-frequency and multi-scale PDEs.
We resort to the Gaussian process (GP) framework to solve this problem.
We use Kronecker product properties and multilinear algebra to promote computational efficiency and scalability.
arXiv Detail & Related papers (2023-11-08T05:26:58Z) - Variational Autoencoder Kernel Interpretation and Selection for
Classification [59.30734371401315]
This work proposed kernel selection approaches for probabilistic classifiers based on features produced by the convolutional encoder of a variational autoencoder.
In the proposed implementation, each latent variable was sampled from the distribution associated with a single kernel of the last encoder's convolution layer, as an individual distribution was created for each kernel.
Choosing relevant features among the sampled latent variables makes it possible to perform kernel selection, filtering out uninformative features and kernels.
arXiv Detail & Related papers (2022-09-10T17:22:53Z) - Transformer with Fourier Integral Attentions [18.031977028559282]
We propose a new class of transformers, FourierFormers, in which the dot-product kernels are replaced by novel generalized Fourier integral kernels.
Compared to the conventional transformers with dot-product attention, FourierFormers attain better accuracy and reduce the redundancy between attention heads.
We empirically corroborate the advantages of FourierFormers over the baseline transformers in a variety of practical applications including language modeling and image classification.
arXiv Detail & Related papers (2022-06-01T03:06:21Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Sigma-Delta and Distributed Noise-Shaping Quantization Methods for
Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
By testing our methods on several machine learning tasks, we empirically show that they compare favorably to other state-of-the-art quantization methods in this context.
arXiv Detail & Related papers (2021-06-04T17:24:47Z) - Quantization Algorithms for Random Fourier Features [25.356048456005023]
The method of random Fourier features (RFF) has also become popular for approximating the Gaussian kernel.
RFF applies a specific nonlinear transformation on the projected data from random projections.
In this paper, we focus on developing quantization algorithms for RFF; a generic illustrative sketch of quantized RFF features appears after this list.
arXiv Detail & Related papers (2021-02-25T18:51:39Z)
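The last two entries above concern quantizing RFF features. As a generic illustration (not either paper's specific Sigma-Delta or noise-shaping scheme), the sketch below applies unbiased stochastic rounding to an RFF feature matrix so that inner products of the low-bit features still approximate the kernel while using far less memory. The helper `stochastic_round` and the 4-bit setting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_round(Z, bits, rng):
    """Quantize an array to 2**bits uniform levels with unbiased stochastic rounding."""
    lo, hi = Z.min(), Z.max()
    levels = 2 ** bits - 1
    scaled = (Z - lo) / (hi - lo) * levels          # map values onto [0, levels]
    floor = np.floor(scaled)
    rounded = floor + (rng.random(Z.shape) < scaled - floor)  # round up with prob. = fractional part
    return rounded / levels * (hi - lo) + lo        # map back to the original range

# RFF features for an RBF kernel (same construction as in the sketch further up).
n, d, D, sigma = 100, 4, 4096, 1.0
X = rng.normal(size=(n, d))
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

Zq = stochastic_round(Z, bits=4, rng=rng)   # 4-bit features instead of 32-bit floats

K_full = Z @ Z.T                            # Gram matrix from full-precision features
K_quant = Zq @ Zq.T                         # Gram matrix from quantized features
print("mean abs difference:", np.abs(K_quant - K_full).mean())
```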
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.