Regularised Least-Squares Regression with Infinite-Dimensional Output Space
- URL: http://arxiv.org/abs/2010.10973v7
- Date: Wed, 16 Feb 2022 06:11:15 GMT
- Title: Regularised Least-Squares Regression with Infinite-Dimensional Output Space
- Authors: Junhyung Park and Krikamol Muandet
- Abstract summary: This report presents some learning theory results on vector-valued reproducing kernel Hilbert space (RKHS) regression.
Our approach is based on the integral operator technique using spectral theory for non-compact operators.
- Score: 8.556182714884567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This short technical report presents some learning theory results on
vector-valued reproducing kernel Hilbert space (RKHS) regression, where the
input space is allowed to be non-compact and the output space is a (possibly
infinite-dimensional) Hilbert space. Our approach is based on the integral
operator technique using spectral theory for non-compact operators. We place a
particular emphasis on obtaining results with as few assumptions as possible;
as such we only use Chebyshev's inequality, and no effort is made to obtain the
best rates or constants.
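As a concrete, heavily simplified illustration of the setting above, the following sketch implements vector-valued kernel ridge regression for the separable operator-valued kernel $K(x, x') = k(x, x')\,\mathrm{Id}$, with the output space truncated to $\mathbb{R}^3$. The kernel choice, data, and function names are illustrative assumptions, not the paper's code:

```python
import numpy as np

# Vector-valued kernel ridge regression with the separable kernel
# K(x, x') = k(x, x') * Id. The regularised least-squares solution is
#   f(x) = sum_i k(x, x_i) alpha_i,   alpha = (G + n*lam*I)^{-1} Y,
# where G is the n x n Gram matrix of the scalar kernel k and the rows of
# Y are the output vectors (here a 3-dimensional truncation).

def rbf_gram(X, Z, gamma=1.0):
    """Gaussian-kernel Gram matrix between the rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_vv_krr(X, Y, lam=1e-6, gamma=1.0):
    n = X.shape[0]
    G = rbf_gram(X, X, gamma)
    return np.linalg.solve(G + n * lam * np.eye(n), Y)  # (n, d_out)

def predict_vv_krr(alpha, X_train, X_new, gamma=1.0):
    return rbf_gram(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
# smooth 3-dimensional outputs standing in for Hilbert-space elements
Y = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X.sum(axis=1)], axis=1)

alpha = fit_vv_krr(X, Y)
Y_hat = predict_vv_krr(alpha, X, X)
print(np.abs(Y_hat - Y).max())  # small training residual for tiny lam
```

For a genuinely infinite-dimensional output space, the rows of `Y` would be replaced by (truncations of) Hilbert-space elements; the closed form for `alpha` is unchanged because the kernel is separable.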
Related papers
- Towards Optimal Sobolev Norm Rates for the Vector-Valued Regularized Least-Squares Algorithm [30.08981916090924]
We present the first optimal rates for infinite-dimensional vector-valued ridge regression on a continuous scale of norms that interpolate between $L_2$ and the hypothesis space.
We show that these rates are optimal in most cases and independent of the dimension of the output space.
arXiv Detail & Related papers (2023-12-12T11:48:56Z)
- Localisation of Regularised and Multiview Support Vector Machine Learning [0.0]
We prove a few representer approximations for a localised version of the regularised multiview support vector machine learning problem introduced by H.Q. Minh, L. Bazzani, and V. Murino.
arXiv Detail & Related papers (2023-04-12T07:19:02Z)
- Recursive Estimation of Conditional Kernel Mean Embeddings [0.0]
Kernel mean embeddings map probability distributions to elements of a reproducing kernel Hilbert space (RKHS).
We present a new algorithm to estimate the conditional kernel mean map in a Hilbert-space-valued $L_2$ space, that is, in a Bochner space.
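For context, the standard batch ridge estimator of a conditional kernel mean embedding (not the recursive scheme of this paper) can be sketched as follows; all names and parameter values are illustrative assumptions:

```python
import numpy as np

# Batch ridge estimator of a conditional kernel mean embedding:
#   mu_hat(x) = sum_i beta_i(x) phi(y_i),  beta(x) = (G + n*lam*I)^{-1} k_x,
# where G is the Gram matrix of the input kernel and k_x = (k(x, x_i))_i.
# With the identity output feature phi(y) = y, beta(x) @ Y estimates
# E[Y | X = x], which we can check on a toy model.

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * (x[:, None] - z[None, :]) ** 2)

rng = np.random.default_rng(2)
n = 400
X = rng.uniform(-2.0, 2.0, size=n)
Y = X ** 2 + 0.1 * rng.normal(size=n)        # E[Y | X = x] = x^2

lam = 1e-3
G = rbf(X, X)
K_inv = np.linalg.inv(G + n * lam * np.eye(n))

def cme_weights(x):
    return K_inv @ rbf(np.atleast_1d(float(x)), X)[0]

est = cme_weights(1.0) @ Y                   # estimate of E[Y | X = 1] = 1
print(est)
```

The weights `beta(x)` reweight the output features, so the same formula embeds the whole conditional distribution, not just its mean, once `phi` is a nonlinear feature map.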
arXiv Detail & Related papers (2023-02-12T16:55:58Z)
- Continuous percolation in a Hilbert space for a large system of qubits [58.720142291102135]
The percolation transition is defined through the appearance of the infinite cluster.
We show that the exponentially increasing dimensionality of the Hilbert space makes its covering by finite-size hyperspheres inefficient.
Our approach to the percolation transition in compact metric spaces may prove useful for its rigorous treatment in other contexts.
arXiv Detail & Related papers (2022-10-15T13:53:21Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Optimal Learning Rates for Regularized Least-Squares with a Fourier Capacity Condition [14.910167993978487]
We derive minimax adaptive rates for a new, broad class of Tikhonov-regularized learning problems in Hilbert scales.
We demonstrate that the spectrum of the Mercer operator can be inferred in the presence of "tight" embeddings of suitable Hilbert scales.
arXiv Detail & Related papers (2022-04-16T18:32:33Z)
- Constrained mixers for the quantum approximate optimization algorithm [55.41644538483948]
We present a framework for constructing mixing operators that restrict the evolution to a subspace of the full Hilbert space.
We generalize the "XY"-mixer designed to preserve the subspace of "one-hot" states to the general case of subspaces given by a number of computational basis states.
Our analysis also leads to valid Trotterizations for the "XY"-mixer with fewer CX gates than previously known.
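A quick numerical sanity check of the subspace-preservation property described above: the ring "XY" mixer commutes with the Hamming weight, so $e^{-itH}$ maps one-hot states to superpositions of one-hot states. This toy verification on 3 qubits is my own, not taken from the paper:

```python
import numpy as np

# Build the ring "XY" mixer H = sum_{(i,j)} (X_i X_j + Y_i Y_j)/2 on 3 qubits
# and check that exp(-i t H) keeps a one-hot state inside the one-hot subspace.

I2 = np.eye(2, dtype=complex)
PX = np.array([[0, 1], [1, 0]], dtype=complex)
PY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for p in ops:
        out = np.kron(out, p)
    return out

n = 3
H = np.zeros((2 ** n, 2 ** n), dtype=complex)
for i in range(n):                       # ring edges (0,1), (1,2), (2,0)
    j = (i + 1) % n
    for P in (PX, PY):
        ops = [I2] * n
        ops[i], ops[j] = P, P
        H += 0.5 * kron_all(ops)

# exp(-i t H) via the eigendecomposition of the Hermitian matrix H
vals, vecs = np.linalg.eigh(H)
U = vecs @ np.diag(np.exp(-1j * 0.7 * vals)) @ vecs.conj().T

one_hot = {0b100, 0b010, 0b001}          # basis indices of the one-hot states
psi = U[:, 0b100]                        # evolve |100>
leak = sum(abs(psi[k]) ** 2 for k in range(2 ** n) if k not in one_hot)
print(leak)  # numerically zero: the one-hot subspace is preserved
```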
arXiv Detail & Related papers (2022-03-11T17:19:26Z)
- Nyström Kernel Mean Embeddings [92.10208929236826]
We propose an efficient approximation procedure based on the Nyström method.
It yields sufficient conditions on the subsample size to obtain the standard $n^{-1/2}$ rate.
We discuss applications of this result for the approximation of the maximum mean discrepancy and quadrature rules.
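To illustrate the idea (with parameters and setup that are illustrative assumptions, not the paper's): Nyström features turn the kernel mean embedding into an ordinary average of finite-dimensional feature vectors, so the maximum mean discrepancy becomes a Euclidean distance:

```python
import numpy as np

# Nystroem features phi(x) = W^{-1/2} (k(x, z_1), ..., k(x, z_m)) built from
# m landmarks z_j; the empirical kernel mean embedding of a sample is then the
# average feature vector, and MMD^2 is a squared Euclidean distance.

def rbf(X, Z, gamma=2.0):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystroem_features(X, landmarks, gamma=2.0):
    W = rbf(landmarks, landmarks, gamma)
    vals, vecs = np.linalg.eigh(W)          # W is symmetric PSD
    vals = np.maximum(vals, 1e-8)           # clip for numerical stability
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return rbf(X, landmarks, gamma) @ W_inv_sqrt

rng = np.random.default_rng(1)
P = rng.normal(0.0, 1.0, size=(500, 1))     # sample from N(0, 1)
Q = rng.normal(1.0, 1.0, size=(500, 1))     # sample from N(1, 1)
landmarks = P[rng.choice(500, size=30, replace=False)]

mu_P = nystroem_features(P, landmarks).mean(axis=0)
mu_Q = nystroem_features(Q, landmarks).mean(axis=0)
mmd2 = ((mu_P - mu_Q) ** 2).sum()
print(mmd2)  # clearly positive: the two distributions differ
```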
arXiv Detail & Related papers (2022-01-31T08:26:06Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
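A minimal sketch of value estimation with a kernel-represented value function, on a toy deterministic 3-state cyclic reward process; this simple TD(0) scheme and all constants are illustrative assumptions, not the paper's estimator or its bounds:

```python
import numpy as np

# TD(0) with an RKHS-represented value function V(s) = sum_j alpha_j k(s, c_j)
# on the deterministic 3-state cycle 0 -> 1 -> 2 -> 0 with rewards (1, 0, 0).
# The true values solve V(s) = r(s) + gamma * V(next(s)), so
# V(0) = 1 / (1 - gamma^3).

def k(s, c, width=5.0):
    return np.exp(-width * (s - c) ** 2)

centers = np.array([0.0, 1.0, 2.0])     # one kernel centre per state
alpha = np.zeros(3)                     # RKHS coefficients of V
rewards = {0.0: 1.0, 1.0: 0.0, 2.0: 0.0}
gamma, eta = 0.9, 0.1

def V(s):
    return float(alpha @ k(s, centers))

s = 0.0
for _ in range(3000):
    s_next = (s + 1.0) % 3.0
    delta = rewards[s] + gamma * V(s_next) - V(s)   # TD error
    alpha += eta * delta * k(s, centers)            # functional-gradient step
    s = s_next

print(V(0.0), 1.0 / (1.0 - gamma ** 3))  # estimate vs true value, about 3.69
```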
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable Approach for Continuous Markov Random Fields [53.31927549039624]
We show that a piecewise discretization preserves structure better than existing discretization approaches.
We apply this theory to the problem of matching two images.
arXiv Detail & Related papers (2021-07-13T12:31:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.