Trigonometric Quadrature Fourier Features for Scalable Gaussian Process Regression
- URL: http://arxiv.org/abs/2310.14544v1
- Date: Mon, 23 Oct 2023 03:53:09 GMT
- Title: Trigonometric Quadrature Fourier Features for Scalable Gaussian Process Regression
- Authors: Kevin Li, Max Balakirsky, Simon Mak
- Abstract summary: Quadrature Fourier Features (QFF) have gained popularity in recent years due to their improved approximation accuracy and better calibrated uncertainty estimates.
A key limitation of QFF is that its performance can suffer from well-known pathologies related to highly oscillatory quadrature.
We address this critical issue via a new Trigonometric Quadrature Fourier Feature (TQFF) method, which uses a novel non-Gaussian quadrature rule.
- Score: 3.577968559443225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fourier feature approximations have been successfully applied in the
literature for scalable Gaussian Process (GP) regression. In particular,
Quadrature Fourier Features (QFF) derived from Gaussian quadrature rules have
gained popularity in recent years due to their improved approximation accuracy
and better calibrated uncertainty estimates compared to Random Fourier Feature
(RFF) methods. However, a key limitation of QFF is that its performance can
suffer from well-known pathologies related to highly oscillatory quadrature,
resulting in mediocre approximation with limited features. We address this
critical issue via a new Trigonometric Quadrature Fourier Feature (TQFF)
method, which uses a novel non-Gaussian quadrature rule specifically tailored
for the desired Fourier transform. We derive an exact quadrature rule for TQFF,
along with kernel approximation error bounds for the resulting feature map. We
then demonstrate the improved performance of our method over RFF and Gaussian
QFF in a suite of numerical experiments and applications, and show the TQFF
enjoys accurate GP approximations over a broad range of length-scales using
fewer features.
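For concreteness, the following is a minimal Python/NumPy sketch of the two baselines the abstract compares against: Random Fourier Features (RFF) and Gauss-Hermite Quadrature Fourier Features (QFF), applied to a 1-D squared-exponential kernel. It is illustrative only and does not implement the paper's TQFF rule; the length-scale, feature counts, and helper names (exact_kernel, rff_features, qff_features) are assumptions made for this demo.

import numpy as np

rng = np.random.default_rng(0)
l = 0.5    # length-scale, chosen only for this demo
m = 64     # number of features per method

def exact_kernel(x, y, l):
    # squared-exponential kernel k(r) = exp(-r^2 / (2 l^2))
    r = x[:, None] - y[None, :]
    return np.exp(-r**2 / (2.0 * l**2))

def rff_features(x, omegas, phases):
    # Monte Carlo (RFF) map: omega_i ~ N(0, 1/l^2), b_i ~ Uniform[0, 2*pi]
    return np.sqrt(2.0 / len(omegas)) * np.cos(np.outer(x, omegas) + phases)

def qff_features(x, l, n_nodes):
    # Gauss-Hermite rule for int exp(-t^2) f(t) dt; the change of variables
    # omega = sqrt(2) t / l turns it into an expectation under N(0, 1/l^2).
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    omega = np.sqrt(2.0) * t / l
    v = w / np.sqrt(np.pi)       # normalized quadrature weights (sum to 1)
    sqrt_v = np.sqrt(v)
    # cos(omega (x - y)) = cos(omega x) cos(omega y) + sin(omega x) sin(omega y)
    return np.hstack([sqrt_v * np.cos(np.outer(x, omega)),
                      sqrt_v * np.sin(np.outer(x, omega))])

x = np.linspace(-1.0, 1.0, 50)
K = exact_kernel(x, x, l)

Phi_rff = rff_features(x,
                       rng.normal(0.0, 1.0 / l, size=m),
                       rng.uniform(0.0, 2.0 * np.pi, size=m))
Phi_qff = qff_features(x, l, m // 2)   # two features per node -> m features total

print("RFF max kernel error:", np.abs(Phi_rff @ Phi_rff.T - K).max())
print("QFF max kernel error:", np.abs(Phi_qff @ Phi_qff.T - K).max())

For a moderate length-scale like this one, the quadrature features are typically far more accurate than the Monte Carlo features at the same feature count; the highly oscillatory regime (length-scales small relative to the input range) is where Gaussian quadrature degrades and where the paper's TQFF rule is aimed.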
Related papers
- On the design of scalable, high-precision spherical-radial Fourier features [5.216151302783165]
We introduce a new family of quadrature rules that accurately approximate the Gaussian measure in higher dimensions by exploiting its isotropy.
Compared to previous work, our approach leverages a thorough analysis of the approximation error, which suggests natural choices for both the radial and spherical components.
arXiv Detail & Related papers (2024-08-23T17:11:25Z)
- Data-Driven Filter Design in FBP: Transforming CT Reconstruction with Trainable Fourier Series [3.6508148866314163]
We introduce a trainable filter for computed tomography (CT) reconstruction within the filtered backprojection (FBP) framework.
This method overcomes the limitation in noise reduction by optimizing Fourier series coefficients to construct the filter.
Our filter can be easily integrated into existing CT reconstruction models, making it an adaptable tool for a wide range of practical applications.
arXiv Detail & Related papers (2024-01-29T10:47:37Z)
- Neural Fields with Thermal Activations for Arbitrary-Scale Super-Resolution [56.089473862929886]
We present a novel way to design neural fields such that points can be queried with an adaptive Gaussian PSF.
With its theoretically guaranteed anti-aliasing, our method sets a new state of the art for arbitrary-scale single image super-resolution.
arXiv Detail & Related papers (2023-11-29T14:01:28Z)
- Fourier Continuation for Exact Derivative Computation in Physics-Informed Neural Operators [53.087564562565774]
The Physics-Informed Neural Operator (PINO) is a machine learning architecture that has shown promising empirical results for learning partial differential equations.
We present an architecture that leverages Fourier continuation (FC) to apply the exact gradient method to PINO for nonperiodic problems.
arXiv Detail & Related papers (2022-11-29T06:37:54Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
Testing our methods on several machine learning tasks, we empirically show that they compare favorably to other state-of-the-art quantization methods in this context.
arXiv Detail & Related papers (2021-06-04T17:24:47Z)
- Bias-Free Scalable Gaussian Processes via Randomized Truncations [26.985324213848475]
This paper analyzes two common techniques: early truncated conjugate gradients (CG) and random Fourier features (RFF).
CG tends to underfit while RFF tends to overfit.
We address these issues using randomized truncation estimators that eliminate bias in exchange for increased variance.
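As a purely illustrative picture of this idea (not the paper's estimator), the following Python sketch applies a Russian-roulette-style randomized truncation to a convergent series: reweighting each retained term by its survival probability removes the truncation bias at the cost of extra variance.

import numpy as np

rng = np.random.default_rng(0)

def randomized_truncation(term, p_stop=0.1):
    # Unbiased estimate of sum_{k>=1} term(k) with geometric stopping:
    # the k-th term survives with probability (1 - p_stop)**(k - 1),
    # so dividing by that probability keeps the expectation exact.
    total, k, survive = 0.0, 1, 1.0
    while True:
        total += term(k) / survive
        if rng.random() < p_stop:
            return total
        k += 1
        survive *= 1.0 - p_stop

# Example: sum_{k>=1} 0.5**k = 1. Truncating at k = 3 gives 0.875 (biased);
# averaging randomized truncations recovers 1 in expectation.
estimates = [randomized_truncation(lambda k: 0.5**k) for _ in range(20000)]
print(np.mean(estimates))   # approximately 1.0

In the paper's setting the truncated quantities are CG iterations and Fourier feature expansions rather than a toy series, but the bias-for-variance exchange is the same.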
arXiv Detail & Related papers (2021-02-12T18:54:10Z)
- Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases [73.53227696624306]
We present a new family of algorithms for learning Fourier-sparse set functions.
In contrast to other work that focused on the Walsh-Hadamard transform, our novel algorithms operate with recently introduced non-orthogonal Fourier transforms.
We demonstrate effectiveness on several real-world applications.
arXiv Detail & Related papers (2020-10-01T14:31:59Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic, exponentially fast decaying error bounds that apply to both the approximated kernel and the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.