Scale invariant process regression
- URL: http://arxiv.org/abs/2208.10461v1
- Date: Mon, 22 Aug 2022 17:32:33 GMT
- Title: Scale invariant process regression
- Authors: Matthias Wieler
- Abstract summary: We propose a novel regression method that does not require specification of a kernel, length scale, variance, nor prior mean.
Experiments show that it is possible to derive a working machine learning method by assuming nothing but regularity and scale- and translation invariance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian processes are the leading method for non-parametric regression on
small to medium datasets. One main challenge is the choice of kernel and
optimization of hyperparameters. We propose a novel regression method that does
not require specification of a kernel, length scale, variance, nor prior mean.
Its only hyperparameter is the assumed regularity (degree of differentiability)
of the true function.
We achieve this with a novel non-Gaussian stochastic process that we
construct from minimal assumptions of translation and scale invariance. The
process can be thought of as a hierarchical Gaussian process model, where the
hyperparameters have been incorporated into the process itself. To perform
inference with this process we develop the required mathematical tools.
It turns out that for interpolation, the posterior is a t-process with a
polyharmonic spline as mean. For regression, we state the exact posterior and
find its mean (again a polyharmonic spline) and approximate variance with a
sampling method. Experiments show a performance equal to that of Gaussian
processes with optimized hyperparameters.
The most important insight is that it is possible to derive a working machine
learning method by assuming nothing but regularity and scale- and translation
invariance, without any other model assumptions.
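For context on the closed-form posterior mean mentioned above: a classical polyharmonic spline interpolant is obtained by solving a small bordered linear system. The sketch below is a minimal, generic 1D implementation in NumPy, assuming the standard cubic radial basis phi(r) = r^3 and a degree-one polynomial tail; it illustrates what a polyharmonic spline is, not the paper's t-process inference procedure, and all function names and the choice k=3 are illustrative assumptions.

```python
# Minimal sketch: classical polyharmonic-spline interpolation in 1D.
# Illustrative only -- not the paper's inference method; k=3 (cubic) assumed.
import numpy as np

def phi(r, k=3):
    """Polyharmonic radial basis: r^k for odd k, r^k * log(r) for even k."""
    if k % 2 == 1:
        return r ** k
    # guard log(0); r^k * log(r) -> 0 as r -> 0
    return np.where(r > 0, r ** k * np.log(np.maximum(r, 1e-300)), 0.0)

def fit_polyharmonic(x, y, k=3):
    """Solve the bordered system for spline weights w and linear coefficients c."""
    n = len(x)
    A = phi(np.abs(x[:, None] - x[None, :]), k)   # n x n radial block
    P = np.column_stack([np.ones(n), x])          # degree-1 polynomial block
    lhs = np.block([[A, P], [P.T, np.zeros((2, 2))]])
    rhs = np.concatenate([y, np.zeros(2)])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:n], sol[n:]                       # weights, polynomial coeffs

def evaluate(x_new, x, w, c, k=3):
    """Evaluate the fitted spline at new locations."""
    B = phi(np.abs(x_new[:, None] - x[None, :]), k)
    return B @ w + c[0] + c[1] * x_new

# Usage: interpolate a few noise-free samples of sin(x).
x = np.linspace(0.0, 6.0, 8)
y = np.sin(x)
w, c = fit_polyharmonic(x, y)
print(evaluate(np.array([1.5, 4.2]), x, w, c))
```

In higher dimensions the absolute difference is replaced by the Euclidean distance, and for even spline orders the basis becomes r^k log r; the degree of the polynomial block typically grows with the assumed regularity.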
Related papers
- Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data [17.657917523817243]
We develop and analyze algorithms for instrumental variable regression by viewing the problem as a conditional optimization problem.
In the context of least-squares instrumental variable regression, our algorithms neither require matrix inversions nor mini-batches.
We derive rates of convergence in expectation, that are of order $\mathcal{O}(\log T/T)$ and $\mathcal{O}(1/T^{1-\iota})$ for any $\iota>0$.
arXiv Detail & Related papers (2024-05-29T19:21:55Z) - Implicit Manifold Gaussian Process Regression [49.0787777751317]
Gaussian process regression is widely used to provide well-calibrated uncertainty estimates.
It struggles with high-dimensional data; one way to scale it up is to leverage the implicit low-dimensional manifold upon which the data actually lies.
In this paper we propose a technique capable of inferring implicit structure directly from data (labeled and unlabeled) in a fully differentiable way.
arXiv Detail & Related papers (2023-10-30T09:52:48Z) - Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for
Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z) - Reducing the Variance of Gaussian Process Hyperparameter Optimization
with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional benefit that has been previously unexplored.
It can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z) - Gauss-Legendre Features for Gaussian Process Regression [7.37712470421917]
We present a Gauss-Legendre quadrature based approach for scaling up Gaussian process regression via a low rank approximation of the kernel matrix.
Our method is very much inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration.
arXiv Detail & Related papers (2021-01-04T18:09:25Z) - A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of the regression problem in which the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z) - Fast Approximate Multi-output Gaussian Processes [6.6174748514131165]
Training with the proposed approach requires computing only an $N \times n$ eigenfunction matrix and an $n \times n$ inverse, where $n$ is a selected number of eigenvalues.
The proposed method can regress over multiple outputs, estimate the derivative of the regressor of any order, and learn the correlations between them.
arXiv Detail & Related papers (2020-08-22T14:34:45Z) - Sparse Gaussian Process Based On Hat Basis Functions [14.33021332215823]
We propose a new sparse Gaussian process method to solve the unconstrained regression problem.
The proposed method reduces the overall computational complexity from $O(n^3)$ in exact Gaussian process to $O(nm^2)$ with $m$ hat basis functions and $n$ training data points.
arXiv Detail & Related papers (2020-06-15T03:55:38Z) - SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for
Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z) - Implicit differentiation of Lasso-type models for hyperparameter
optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.