Sparse Spectrum Warped Input Measures for Nonstationary Kernel Learning
- URL: http://arxiv.org/abs/2010.04315v1
- Date: Fri, 9 Oct 2020 01:10:08 GMT
- Title: Sparse Spectrum Warped Input Measures for Nonstationary Kernel Learning
- Authors: Anthony Tompkins, Rafael Oliveira, Fabio Ramos
- Abstract summary: We propose a general form of explicit, input-dependent, measure-valued warpings for learning nonstationary kernels.
The proposed learning algorithm warps inputs as conditional Gaussian measures that control the smoothness of a standard stationary kernel.
We demonstrate remarkable parameter efficiency of the warping functions in learning problems across both small and large data regimes.
- Score: 29.221457769884648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We establish a general form of explicit, input-dependent, measure-valued
warpings for learning nonstationary kernels. While stationary kernels are
ubiquitous and simple to use, they struggle to adapt to functions that vary in
smoothness with respect to the input. The proposed learning algorithm warps
inputs as conditional Gaussian measures that control the smoothness of a
standard stationary kernel. This construction allows us to capture
non-stationary patterns in the data and provides an intuitive inductive bias. The
resulting method is based on sparse spectrum Gaussian processes, enabling
closed-form solutions, and is extensible to a stacked construction to capture
more complex patterns. The method is extensively validated alongside related
algorithms on synthetic and real-world datasets. We demonstrate remarkable
parameter efficiency of the warping functions in learning problems across both
small and large data regimes.
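The construction can be illustrated with a minimal Python sketch: a sparse spectrum (random Fourier feature) approximation of an RBF kernel whose inputs pass through a simple warp before the feature map. The paper's warps are conditional Gaussian measures, so the fixed tanh warp, constants, and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def warp(x, a=1.0, b=5.0, c=0.5):
    # Illustrative deterministic warp: stretches/compresses the input axis so
    # a nonstationary effective lengthscale emerges from a stationary kernel.
    return a * x + b * np.tanh(c * x)

def sparse_spectrum_features(x, freqs):
    # Random Fourier features: phi(x) = [cos(w x), sin(w x)] / sqrt(M)
    z = np.outer(x, freqs)
    return np.hstack([np.cos(z), np.sin(z)]) / np.sqrt(len(freqs))

rng = np.random.default_rng(0)
M = 50                              # number of spectral points
freqs = rng.standard_normal(M)      # samples from the RBF spectral density

x = np.linspace(-3, 3, 200)
y = np.sin(5 * x) * (x < 0) + np.sin(x) * (x >= 0) \
    + 0.05 * rng.standard_normal(len(x))

Phi = sparse_spectrum_features(warp(x), freqs)  # warped sparse-spectrum features
noise = 0.05 ** 2
# Closed-form Bayesian linear regression in feature space (sparse spectrum GP)
A = Phi.T @ Phi + noise * np.eye(2 * M)
w_mean = np.linalg.solve(A, Phi.T @ y)
y_pred = Phi @ w_mean
print("train RMSE:", np.sqrt(np.mean((y - y_pred) ** 2)))
```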
Related papers
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
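As a hedged toy illustration of the normalisation idea above (the single-index model, ReLU link, and isotropic data are assumptions, not the paper's anisotropic setting), one can renormalise the weight to the unit sphere after each gradient step:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 2000
w_star = np.zeros(d); w_star[0] = 1.0        # true direction to recover
X = rng.standard_normal((n, d))
y = np.maximum(X @ w_star, 0.0)              # single-index target, ReLU link

w = rng.standard_normal(d); w /= np.linalg.norm(w)
lr = 0.05
for _ in range(500):
    pred = np.maximum(X @ w, 0.0)
    grad = X.T @ ((pred - y) * (X @ w > 0)) / n  # gradient of squared loss
    w -= lr * grad
    w /= np.linalg.norm(w)                       # weight-normalisation step
print("alignment |<w, w*>|:", abs(w @ w_star))
```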
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- Manifold Learning with Sparse Regularised Optimal Transport [0.17205106391379024]
Real-world datasets are subject to noisy observations and sampling, so that distilling information about the underlying manifold is a major challenge.
We propose a method for manifold learning that utilises a symmetric version of optimal transport with a quadratic regularisation.
We prove that the resulting kernel is consistent with a Laplace-type operator in the continuous limit, establish robustness to heteroskedastic noise and exhibit these results in simulations.
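A short sketch can convey the idea of an OT-derived affinity for manifold learning. Note the paper uses quadratic regularisation; the stand-in below uses the more common entropic (Sinkhorn) regularisation instead, and all constants and data are assumptions:

```python
import numpy as np

def sinkhorn(C, eps=0.1, iters=200):
    # Entropic OT plan between two uniform discrete measures, cost matrix C.
    K = np.exp(-C / eps)
    a = np.full(C.shape[0], 1.0 / C.shape[0])
    b = np.full(C.shape[1], 1.0 / C.shape[1])
    u = np.ones(C.shape[0]); v = np.ones(C.shape[1])
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 100)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((100, 2))
C = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # squared distances
P = sinkhorn(C)
# The symmetrised plan acts as a graph affinity; its Laplacian eigenvectors
# can then provide a manifold embedding, in the spirit of the paper.
W = 0.5 * (P + P.T)
print(W.shape, W.sum())
```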
arXiv Detail & Related papers (2023-07-19T08:05:46Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to inference for temporal point processes using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields improved estimation of pattern latency compared to the state-of-the-art.
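To make the setting concrete, here is a minimal sketch of a discretized Hawkes intensity with a finite-support parametric kernel; the truncated-exponential kernel, grid step, and event times are assumptions, and the paper's estimator builds a fast inference procedure on top of such a discretization:

```python
import numpy as np

def discretized_intensity(events, T, dt, mu, kernel):
    # lambda(t) = mu + sum_i phi(t - t_i), evaluated on a regular grid of
    # step dt; the finite-support kernel phi makes this a short convolution.
    grid = np.arange(0.0, T, dt)
    counts, _ = np.histogram(events, bins=np.append(grid, T))
    phi = kernel(np.arange(0.0, 1.0, dt))      # support truncated to [0, 1)
    excitation = np.convolve(counts, phi)[: len(grid)]
    return grid, mu + excitation

# Illustrative truncated-exponential kernel; alpha, beta, mu are assumptions.
alpha, beta = 0.8, 3.0
phi = lambda s: alpha * beta * np.exp(-beta * s)

rng = np.random.default_rng(3)
events = np.sort(rng.uniform(0.0, 10.0, 40))   # placeholder event times
grid, lam = discretized_intensity(events, T=10.0, dt=0.01, mu=0.5, kernel=phi)
print("mean intensity:", lam.mean())
```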
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Learning Globally Smooth Functions on Manifolds [94.22412028413102]
Learning smooth functions is generally challenging, except in simple cases such as learning linear or kernel models.
This work proposes to overcome these obstacles by combining techniques from semi-infinite constrained learning and manifold regularization.
We prove that, under mild conditions, this method estimates the Lipschitz constant of the solution, learning a globally smooth solution as a byproduct.
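The semi-infinite constrained formulation is beyond a short snippet, but a pointwise gradient-norm penalty conveys how smoothness can be enforced while fitting the data; the kernel model and penalty weights below are assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 100)[:, None]
y = np.sin(3 * x[:, 0]) + 0.1 * rng.standard_normal(100)

# RBF kernel ridge regression plus a penalty on the derivative norm,
# a pointwise surrogate for constraining the Lipschitz constant.
ell = 0.2
def rbf(a, b):
    return np.exp(-((a - b.T) ** 2) / (2 * ell ** 2))

K = rbf(x, x)
dK = -(x - x.T) / ell ** 2 * K        # d/dx of the kernel on the grid
lam_fit, lam_lip = 1e-3, 1e-2
# Minimise ||K c - y||^2 + lam_fit c'K c + lam_lip ||dK c||^2  (closed form)
A = K @ K + lam_fit * K + lam_lip * dK.T @ dK
c = np.linalg.solve(A, K @ y)
print("max |f'| on grid:", np.abs(dK @ c).max())
```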
arXiv Detail & Related papers (2022-10-01T15:45:35Z)
- Nonparametric learning of kernels in nonlocal operators [6.314604944530131]
We provide a rigorous identifiability analysis and convergence study for the learning of kernels in nonlocal operators.
We propose a nonparametric regression algorithm with a novel data adaptive RKHS Tikhonov regularization method based on the function space of identifiability.
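As a simplified stand-in for the paper's data-adaptive RKHS regularisation, the sketch below recovers a radial kernel inside a discretised nonlocal operator by plain Tikhonov-regularised least squares; the operator, grid, and test functions are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
x = np.linspace(0, 1, n)
dx = x[1] - x[0]
dist = np.abs(x[:, None] - x[None, :])          # pairwise distances
idx = np.round(dist / dx).astype(int)           # distance-bin index

k_true = np.exp(-(np.arange(n) * dx) / 0.1)     # ground-truth radial kernel

def nonlocal_op(u, kvals):
    # L_k[u](x_i) = sum_j k(|x_i - x_j|) (u_j - u_i) dx
    K = kvals[idx]
    return (K * (u[None, :] - u[:, None])).sum(axis=1) * dx

# Build a linear system f = G k from several (u, f) observation pairs
rows, rhs = [], []
for m in (1, 2, 3):
    u = np.sin(2 * np.pi * m * x)
    f = nonlocal_op(u, k_true) + 1e-4 * rng.standard_normal(n)
    for i in range(n):
        g = np.zeros(n)
        np.add.at(g, idx[i], (u - u[i]) * dx)   # coefficients of k per distance
        rows.append(g); rhs.append(f[i])
G, f_all = np.array(rows), np.array(rhs)
lam = 1e-6
k_hat = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ f_all)  # Tikhonov
print("relative error:",
      np.linalg.norm(k_hat - k_true) / np.linalg.norm(k_true))
```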
arXiv Detail & Related papers (2022-05-23T02:47:55Z)
- Linear Time Kernel Matrix Approximation via Hyperspherical Harmonics [3.24890820102255]
We propose a new technique for constructing low-rank approximations of matrices that arise in kernel methods for machine learning.
Our approach pairs a novel automatically constructed analytic expansion of the underlying kernel function with a data-dependent compression step to further optimize the approximation.
Experimental results show our approach compares favorably to the commonly used Nystrom method with respect to both accuracy for a given rank and computational time for a given accuracy across a variety of kernels, dimensions, and datasets.
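For context, a minimal version of the Nystrom baseline mentioned above looks like this; landmark choice, kernel, and sizes are assumptions, and the paper's own method instead uses analytic hyperspherical-harmonic expansions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(6)
X = rng.standard_normal((1000, 5))
m = 50                                   # rank of the approximation
landmarks = X[rng.choice(len(X), m, replace=False)]

C = rbf_kernel(X, landmarks)             # n x m cross-kernel
W = rbf_kernel(landmarks, landmarks)     # m x m landmark kernel
# K ~= C W^{-1} C^T = (C L^{-T})(C L^{-T})^T with Cholesky W = L L^T
L = np.linalg.cholesky(W + 1e-8 * np.eye(m))
F = C @ np.linalg.pinv(L).T              # low-rank factor, n x m
K_exact = rbf_kernel(X[:100], X[:100])   # spot-check a block
print("block error:", np.abs(K_exact - F[:100] @ F[:100].T).max())
```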
arXiv Detail & Related papers (2022-02-08T05:19:39Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning [80.20302993614594]
We provide a statistical analysis to overcome drawbacks of Laplacian regularization.
We unveil a large body of spectral filtering methods that exhibit desirable behaviors.
We provide realistic computational guidelines in order to make our method usable with large amounts of data.
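A classic instance of Laplacian regularisation in semi-supervised learning can be sketched in closed form as follows; graph bandwidth, regularisation weight, and data are assumptions, and the paper analyses and refines this family with spectral filtering:

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_lab = 200, 10
X = rng.standard_normal((n, 2))
y_true = np.sign(X[:, 0])
labeled = rng.choice(n, n_lab, replace=False)

# Gaussian affinity graph and its (unnormalised) Laplacian
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
L = np.diag(W.sum(1)) - W

# Laplacian-regularised least squares on the labeled subset:
# minimise sum_labeled (f_i - y_i)^2 + lam f'L f  (closed form)
S = np.zeros((n, n)); S[labeled, labeled] = 1.0
lam = 0.1
f = np.linalg.solve(S + lam * L, S @ y_true)
print("transductive accuracy:", (np.sign(f) == y_true).mean())
```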
arXiv Detail & Related papers (2020-09-09T14:28:54Z)
- Randomly Projected Additive Gaussian Processes for Regression [37.367935314532154]
We use additive sums of kernels for GP regression, where each kernel operates on a different random projection of its inputs.
We prove this convergence and its rate, and propose a deterministic approach that converges more quickly than purely random projections.
arXiv Detail & Related papers (2019-12-30T07:26:18Z)
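The additive construction described above admits a direct sketch: sum RBF kernels, each acting on its own random one-dimensional projection of the inputs, then do standard GP regression; projection count, lengthscale, and data below are assumptions:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

rng = np.random.default_rng(8)
n, d, J = 100, 10, 5
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Additive kernel: each term is an RBF on a different random 1-D projection
projections = [rng.standard_normal((d, 1)) / np.sqrt(d) for _ in range(J)]
K = sum(rbf(X @ P, X @ P) for P in projections) / J

# Standard GP regression with the additive kernel
noise = 0.1 ** 2
alpha = np.linalg.solve(K + noise * np.eye(n), y)
print("train fit RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)))
```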
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.