Gaussian Process Regression constrained by Boundary Value Problems
- URL: http://arxiv.org/abs/2012.11857v1
- Date: Tue, 22 Dec 2020 06:55:15 GMT
- Title: Gaussian Process Regression constrained by Boundary Value Problems
- Authors: Mamikon Gulian, Ari Frankel, Laura Swiler
- Abstract summary: We develop a framework for Gaussian process regression constrained by boundary value problems.
The framework combines co-kriging with the linear transformation of a Gaussian process together with the use of kernels given by spectral expansions in eigenfunctions of the boundary value problem.
We demonstrate that the resulting framework yields more accurate and stable solution inference as compared to physics-informed Gaussian process regression.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a framework for Gaussian process regression constrained by
boundary value problems. The framework may be applied to infer the solution of
a well-posed boundary value problem with a known second-order differential
operator and boundary conditions, but for which only scattered observations of
the source term are available. Scattered observations of the solution may also
be used in the regression. The framework combines co-kriging with the linear
transformation of a Gaussian process together with the use of kernels given by
spectral expansions in eigenfunctions of the boundary value problem. Thus, it
benefits from a reduced-rank property of covariance matrices. We demonstrate
that the resulting framework yields more accurate and stable solution inference
as compared to physics-informed Gaussian process regression without boundary
condition constraints.
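The mechanics are easy to sketch. Below is a minimal illustration of the spectral-expansion idea for the 1D Poisson problem -u'' = f on [0, 1] with homogeneous Dirichlet conditions, whose eigenfunctions are sqrt(2) sin(n pi x). This is not the authors' code; the mode count, per-mode prior variances, and noise level are assumptions made for the example.

```python
# Minimal 1D sketch (not the authors' code) of BVP-constrained GP regression:
# infer the solution u of -u'' = f on [0, 1] with u(0) = u(1) = 0 from noisy,
# scattered observations of the source f alone. The prior kernel is a truncated
# spectral expansion in the Dirichlet eigenfunctions phi_n(x) = sqrt(2) sin(n pi x),
# so every covariance matrix has rank <= N_TERMS (the reduced-rank property).
# The prior variances gamma_n = n**-4 are an assumed smoothness choice.
import numpy as np

N_TERMS, NOISE_VAR = 30, 1e-4
n = np.arange(1, N_TERMS + 1)
lam = (n * np.pi) ** 2                    # eigenvalues of -d^2/dx^2
gamma = n ** -4.0                         # assumed prior variance per mode

def phi(x):
    # Eigenfunctions evaluated at x: shape (len(x), N_TERMS).
    return np.sqrt(2.0) * np.sin(np.outer(x, n) * np.pi)

# Scattered noisy observations of the source f(x) = pi^2 sin(pi x),
# for which the exact solution is u(x) = sin(pi x).
rng = np.random.default_rng(0)
x_f = rng.uniform(0.0, 1.0, 15)
y_f = np.pi**2 * np.sin(np.pi * x_f) + np.sqrt(NOISE_VAR) * rng.standard_normal(15)

# Under u = sum_n w_n phi_n with w_n ~ N(0, gamma_n), the source is
# f = sum_n w_n lam_n phi_n, so co-kriging from f-observations reduces
# to Bayesian linear regression on the weights w.
A = phi(x_f) * lam                        # feature map of the f-observations
S_post = np.linalg.inv(A.T @ A / NOISE_VAR + np.diag(1.0 / gamma))
w_mean = S_post @ A.T @ y_f / NOISE_VAR

x_test = np.linspace(0.0, 1.0, 5)
u_mean = phi(x_test) @ w_mean             # posterior mean of the solution u
print(np.c_[x_test, u_mean, np.sin(np.pi * x_test)])
```

Because every eigenfunction vanishes at the boundary, the inferred solution satisfies the boundary conditions exactly, and all covariance matrices involved have rank at most N_TERMS, which is the reduced-rank benefit noted in the abstract.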
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a trust-region sequential quadratic programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to linearized constraints within a trust region.
To converge to second-order stationary points, our method additionally computes an eigen step to exploit the negative curvature of the reduced Hessian matrix.
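As a rough illustration of the eigen-step idea (a toy sketch, not the paper's algorithm): given a reduced Hessian with a negative eigenvalue, one moves along the corresponding eigenvector, oriented downhill and scaled to a trust-region radius.

```python
# Hedged toy of an "eigen step" (not the paper's method): step along the
# direction of most negative curvature, scaled to a trust-region radius.
import numpy as np

def eigen_step(H_reduced, grad, delta):
    """Return a negative-curvature step of length delta, or None."""
    eigval, eigvec = np.linalg.eigh(H_reduced)   # ascending eigenvalues
    if eigval[0] >= 0.0:                 # no negative curvature to exploit
        return None
    d = eigvec[:, 0]                     # direction of most negative curvature
    d = -d if grad @ d > 0 else d        # orient it downhill
    return delta * d

H = np.array([[1.0, 0.0], [0.0, -2.0]])  # indefinite toy reduced Hessian
print(eigen_step(H, np.array([0.3, 0.1]), delta=0.5))
```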
arXiv Detail & Related papers (2024-09-24T04:39:47Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
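For flavor, here is a hedged toy of matrix sketching in this spirit; it uses the simpler sketch-and-solve variant rather than the paper's randomized iterative scheme, and all problem data are synthetic.

```python
# Hedged sketch-and-solve toy (one flavor of randomized sketching, not the
# paper's iterative scheme): compress an overdetermined system A x ~= b with
# a Gaussian sketch S before solving the least-squares problem.
import numpy as np

rng = np.random.default_rng(1)
m, d, s = 2000, 20, 100                  # rows, unknowns, sketch size (assumed)
A, x_true = rng.standard_normal((m, d)), rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(m)

S = rng.standard_normal((s, m)) / np.sqrt(s)   # Gaussian sketching matrix
x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
print(np.linalg.norm(x_hat - x_true))          # small recovery error
```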
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses the apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z)
- Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
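As context, here is a toy of the plain (non-variance-reduced) stochastic approximation baseline that such estimators improve upon, for a scalar contractive map with a known fixed point.

```python
# Hedged toy (not the paper's estimator): averaged stochastic approximation
# for the fixed point of the contractive map T(x) = 0.5 x + 1 (x* = 2),
# observed only through noisy evaluations.
import numpy as np

rng = np.random.default_rng(2)
x, avg = 0.0, 0.0
for k in range(1, 20001):
    noisy_T = 0.5 * x + 1.0 + 0.1 * rng.standard_normal()
    x += (1.0 / k**0.7) * (noisy_T - x)   # step toward the noisy image
    avg += (x - avg) / k                  # Polyak-Ruppert averaging
print(avg)  # close to the fixed point x* = 2
```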
arXiv Detail & Related papers (2022-01-21T02:46:57Z)
- On Asymptotic Linear Convergence of Projected Gradient Descent for Constrained Least Squares [22.851500417035947]
This manuscript presents a unified framework for the analysis of projected gradient descent in the context of constrained least squares.
We present a recipe for the convergence analysis of PGD and demonstrate it via a beginning-to-end application of the recipe on four fundamental problems.
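A minimal PGD sketch for one instance of constrained least squares, nonnegativity constraints with synthetic data (an illustration of the setting, not the paper's recipe):

```python
# Minimal projected gradient descent for nonnegativity-constrained least
# squares: min ||Ax - b||^2 / 2 subject to x >= 0, on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 10))
b = A @ np.abs(rng.standard_normal(10)) + 0.01 * rng.standard_normal(50)

step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
x = np.zeros(10)
for _ in range(500):
    x = x - step * A.T @ (A @ x - b)      # gradient step
    x = np.maximum(x, 0.0)                # projection onto the nonnegative orthant
print(np.linalg.norm(A @ x - b))
```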
arXiv Detail & Related papers (2021-12-22T09:49:51Z)
- Stochastic Online Linear Regression: the Forward Algorithm to Replace Ridge [24.880035784304834]
We derive high-probability regret bounds for online ridge regression and the forward algorithm.
This enables us to compare online regression algorithms more accurately and eliminate assumptions of bounded observations and predictions.
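The two predictors are easy to contrast in code. The sketch below (synthetic data, illustrative parameters) shows the single difference: the forward algorithm adds the current feature vector to the Gram matrix before predicting.

```python
# Hedged sketch of the two predictors compared in the paper: online ridge
# vs. the forward (Vovk-Azoury-Warmuth) algorithm, on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
d, reg = 5, 1.0
G, h = reg * np.eye(d), np.zeros(d)      # regularized Gram matrix, response sums
theta = rng.standard_normal(d)           # hidden linear model
loss_ridge = loss_fwd = 0.0

for t in range(200):
    x = rng.standard_normal(d)
    y_ridge = x @ np.linalg.solve(G, h)                  # online ridge prediction
    y_fwd = x @ np.linalg.solve(G + np.outer(x, x), h)   # forward prediction
    y = theta @ x + 0.1 * rng.standard_normal()          # label is revealed
    loss_ridge += (y - y_ridge) ** 2
    loss_fwd += (y - y_fwd) ** 2
    G += np.outer(x, x)                                  # update statistics
    h += y * x

print(loss_ridge, loss_fwd)
```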
arXiv Detail & Related papers (2021-11-02T13:57:53Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Tighter Bounds on the Log Marginal Likelihood of Gaussian Process Regression Using Conjugate Gradients [19.772149500352945]
We show that approximate maximum likelihood learning of model parameters by maximising our lower bound retains many of the benefits of the sparse variational approach.
In experiments, we show improved predictive performance with our model for a comparable amount of training time compared to other conjugate gradient based approaches.
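As a hedged sketch of the underlying conjugate-gradient idea (toy kernel and data; this computes the exact data-fit term y^T K^{-1} y rather than the paper's bound):

```python
# Hedged sketch: the data-fit term y^T K^{-1} y of the GP log marginal
# likelihood, computed with conjugate gradients instead of a Cholesky
# factorization. Kernel and data are toy choices.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)

K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.2) ** 2)  # RBF kernel
K += 0.01 * np.eye(200)                                    # noise term

alpha, info = cg(K, y)              # solve K alpha = y iteratively
assert info == 0, "CG did not converge"
print(y @ alpha)                    # data-fit term y^T K^-1 y
```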
arXiv Detail & Related papers (2021-02-16T17:54:59Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
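For reference, here is the deterministic conditional-gradient (Frank-Wolfe) backbone over an l1-ball constraint, on synthetic data; the paper's contributions, variance reduction, smoothing, and per-iteration constraint sampling, sit on top of steps like these.

```python
# Plain Frank-Wolfe on the l1 ball for min ||Ax - b||^2 / 2: each iteration
# calls a linear minimization oracle (a signed vertex of the ball) and takes
# the classic 2/(k+2) convex-combination step.
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
radius, x = 1.0, np.zeros(20)

for k in range(200):
    grad = A.T @ (A @ x - b)
    i = np.argmax(np.abs(grad))           # linear minimization oracle
    s = np.zeros(20)
    s[i] = -radius * np.sign(grad[i])
    x += (2.0 / (k + 2.0)) * (s - x)      # classic step size 2/(k+2)
print(np.linalg.norm(A @ x - b))
```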
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- A Survey of Constrained Gaussian Process Regression: Approaches and Implementation Challenges [0.0]
We provide an overview of several classes of Gaussian process constraints, including positivity or bound constraints, monotonicity and convexity constraints, differential equation constraints, and boundary condition constraints.
We compare the strategies behind each approach as well as the differences in implementation, concluding with a discussion of the computational challenges introduced by constraints.
arXiv Detail & Related papers (2020-06-16T17:03:36Z)
- Linearly Constrained Gaussian Processes with Boundary Conditions [5.33024001730262]
We consider prior knowledge from systems of linear partial differential equations together with their boundary conditions.
We construct multi-output Gaussian process priors with realizations in the solution set of such systems.
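One simple special case of such a construction can be sketched directly: multiplying a base kernel by a function that vanishes on the boundary yields a GP whose sample paths satisfy homogeneous Dirichlet conditions (an illustrative toy, not the paper's general multi-output construction).

```python
# Hedged toy: bake u(0) = u(1) = 0 into a GP prior by scaling an RBF kernel
# with x(1-x), which vanishes at both endpoints. The scaled kernel
# g(x) k(x, x') g(x') stays positive semidefinite.
import numpy as np

def bc_kernel(x1, x2, length=0.2):
    base = np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)
    return np.outer(x1 * (1 - x1), x2 * (1 - x2)) * base

x = np.linspace(0, 1, 100)
K = bc_kernel(x, x) + 1e-10 * np.eye(100)      # tiny jitter for Cholesky
sample = np.linalg.cholesky(K) @ np.random.default_rng(7).standard_normal(100)
print(sample[0], sample[-1])   # both ~0 (exactly 0 up to the jitter)
```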
arXiv Detail & Related papers (2020-02-03T15:19:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.