Linearly Constrained Gaussian Processes with Boundary Conditions
- URL: http://arxiv.org/abs/2002.00818v3
- Date: Mon, 15 Feb 2021 11:34:34 GMT
- Title: Linearly Constrained Gaussian Processes with Boundary Conditions
- Authors: Markus Lange-Hegermann
- Abstract summary: We consider prior knowledge from systems of linear partial differential equations together with their boundary conditions.
We construct multi-output Gaussian process priors with realizations in the solution set of such systems.
- Score: 5.33024001730262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One goal in Bayesian machine learning is to encode prior knowledge into prior
distributions, to model data efficiently. We consider prior knowledge from
systems of linear partial differential equations together with their boundary
conditions. We construct multi-output Gaussian process priors with realizations
in the solution set of such systems, in particular only such solutions can be
represented by Gaussian process regression. The construction is fully
algorithmic via Gröbner bases and it does not employ any approximation. It
builds these priors by combining two parametrizations via a pullback: the first
parametrizes the solutions for the system of differential equations and the
second parametrizes all functions adhering to the boundary conditions.
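The paper's construction runs through Gröbner bases; as a minimal hand-worked sketch of the underlying parametrization idea only (not the paper's algorithm), one can push a latent scalar GP through a linear differential operator so that every realization satisfies the constraint by construction. Below, the operator (∂/∂y, -∂/∂x) turns a squared-exponential prior on a potential g into a divergence-free field; the name div_free_cov and the sampling grid are illustrative assumptions.

```python
import numpy as np

def div_free_cov(x, xp, ell=1.0):
    """2x2 covariance block of f = (dg/dy, -dg/dx) for a latent GP g with
    squared-exponential kernel k = exp(-|x - x'|^2 / (2 ell^2)).
    Obtained by applying the operator to both arguments of k; div f = 0
    holds for every sample path since d_x d_y g - d_y d_x g = 0."""
    r = x - xp
    k = np.exp(-r @ r / (2.0 * ell**2))
    K = np.empty((2, 2))
    K[0, 0] = (1.0 / ell**2 - r[1]**2 / ell**4) * k   # cov(f1, f1)
    K[1, 1] = (1.0 / ell**2 - r[0]**2 / ell**4) * k   # cov(f2, f2)
    K[0, 1] = K[1, 0] = (r[0] * r[1] / ell**4) * k    # cross-covariances
    return K

# Sample the constrained prior on a small grid: every draw is divergence-free.
pts = np.array([[i, j] for i in np.linspace(0, 1, 5) for j in np.linspace(0, 1, 5)])
n = len(pts)
K = np.block([[div_free_cov(pts[i], pts[j]) for j in range(n)] for i in range(n)])
sample = np.linalg.cholesky(K + 1e-8 * np.eye(2 * n)) @ np.random.randn(2 * n)
```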
Related papers
- Gaussian Process Priors for Boundary Value Problems of Linear Partial Differential Equations [3.524869467682149]
Solving systems of partial differential equations (PDEs) is a fundamental task in computational science.
Recent advancements have introduced neural operators and physics-informed neural networks (PINNs) to tackle PDEs.
We propose a novel framework for constructing GP priors that satisfy both general systems of linear PDEs with constant coefficients and linear boundary conditions.
arXiv Detail & Related papers (2024-11-25T18:48:15Z)
- Augmented neural forms with parametric boundary-matching operators for solving ordinary differential equations [0.0]
This paper introduces a formalism for systematically crafting proper neural forms with boundary matches that are amenable to optimization.
It describes a novel technique for converting problems with Neumann or Robin conditions into equivalent problems with parametric Dirichlet conditions.
The proposed augmented neural forms approach was tested on a set of diverse problems, encompassing first- and second-order ordinary differential equations, as well as first-order systems.
arXiv Detail & Related papers (2024-04-30T11:10:34Z)
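As a hedged illustration of the boundary-matching idea (the classic trial-form construction this line of work builds on, not necessarily the paper's exact operators), the sketch below hard-wires Dirichlet conditions u(0)=a, u(1)=b into the trial solution u(x) = a(1-x) + bx + x(1-x)N(x), so any choice of the network N leaves the boundary values untouched; all names are hypothetical.

```python
import numpy as np

def dirichlet_form(N, a=0.0, b=1.0):
    """Trial solution u(x) = a*(1-x) + b*x + x*(1-x)*N(x):
    u(0) = a and u(1) = b hold exactly for any function N."""
    return lambda x: a * (1 - x) + b * x + x * (1 - x) * N(x)

N = lambda x: np.tanh(3.0 * x)          # stand-in for a trainable network
u = dirichlet_form(N, a=2.0, b=-1.0)
assert np.isclose(u(0.0), 2.0) and np.isclose(u(1.0), -1.0)
```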
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
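The Deep Equilibrium framing means the solver's output is defined as a fixed point z* = f(z*, y) of a single learned iteration map rather than an unrolled finite stack. A minimal sketch of that pattern follows, with a made-up contractive linear map standing in for the learned regularized update; it is not the paper's architecture.

```python
import numpy as np

def deq_solve(f, y, z0, tol=1e-8, max_iter=500):
    """Find z* with z* = f(z*, y) by plain fixed-point iteration.
    Converges when f is contractive in z (the paper's convergence
    guarantees come from properties of its learned update, not shown)."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, y)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z_next

# Toy contractive update standing in for a learned proximal-style step.
A = 0.5 * np.eye(3)
f = lambda z, y: A @ z + 0.1 * y
z_star = deq_solve(f, y=np.ones(3), z0=np.zeros(3))   # converges to 0.2 * y
```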
- Gaussian Process Priors for Systems of Linear Partial Differential Equations with Constant Coefficients [4.327763441385371]
Partial differential equations (PDEs) are important tools to model physical systems.
We propose a family of Gaussian process (GP) priors, which we call EPGP, such that all realizations are exact solutions of this system.
We demonstrate our approach on three families of systems of PDEs, the heat equation, wave equation, and Maxwell's equations.
arXiv Detail & Related papers (2022-12-29T14:28:32Z)
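In the spirit of that construction (a hand-rolled toy with assumed fixed frequencies, not the EPGP algorithm itself), a GP prior whose draws exactly solve the 1D heat equation u_t = u_xx can be built by placing Gaussian weights on the exponential solutions exp(-ω²t)·sin(ωx):

```python
import numpy as np

omegas = np.array([1.0, 2.0, 3.0])   # assumed fixed frequencies

def heat_kernel(xt, xt2):
    """Covariance of u(x,t) = sum_i w_i exp(-w_i^2 t) sin(w_i x), w_i ~ N(0,1).
    Each basis function satisfies u_t = u_xx, hence so does every sample."""
    (x, t), (x2, t2) = xt, xt2
    return np.sum(np.exp(-omegas**2 * (t + t2)) * np.sin(omegas * x) * np.sin(omegas * x2))

# Drawing from this prior is just sampling the weights; u solves the PDE exactly.
w = np.random.randn(len(omegas))
u = lambda x, t: np.sum(w * np.exp(-omegas**2 * t) * np.sin(omegas * x))
```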
- AI-enhanced iterative solvers for accelerating the solution of large scale parametrized linear systems of equations [0.0]
This paper exploits up-to-date ML tools and delivers customized iterative solvers of linear equation systems.
The results indicate the superiority of these solvers over conventional iterative solution schemes.
arXiv Detail & Related papers (2022-07-06T09:47:14Z)
- Gaussian Processes and Statistical Decision-making in Non-Euclidean Spaces [96.53463532832939]
We develop techniques for broadening the applicability of Gaussian processes.
We introduce a wide class of efficient approximations built from this viewpoint.
We develop a collection of Gaussian process models over non-Euclidean spaces.
arXiv Detail & Related papers (2022-02-22T01:42:57Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
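A hedged sketch of the core pattern (generic message passing on a 1D grid graph, not the paper's trained architecture): each node aggregates learned messages from its neighbors and updates its state, which is what lets the scheme subsume stencil methods like finite differences.

```python
import numpy as np

def mp_step(h, edges, msg_fn, upd_fn):
    """One message-passing step: aggregate messages over edges, then update.
    h: (n, d) node states; edges: list of (src, dst) pairs."""
    agg = np.zeros_like(h)
    for s, d in edges:
        agg[d] += msg_fn(h[s], h[d])
    return upd_fn(h, agg)

# Toy instance: with these hand-set "messages", one step reproduces the
# centered finite-difference Laplacian at interior nodes, illustrating
# how classical stencils sit inside the message-passing family.
n, h = 8, np.random.randn(8, 1)
edges = [(i, j) for i in range(n) for j in (i - 1, i + 1) if 0 <= j < n]
msg = lambda hs, hd: hs - hd            # neighbor difference
upd = lambda h, agg: agg                # sum of differences = discrete Laplacian
lap = mp_step(h, edges, msg, upd)
```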
- Numerical Solution of Stiff Ordinary Differential Equations with Random Projection Neural Networks [0.0]
We propose a numerical scheme based on Random Projection Neural Networks (RPNN) for the solution of Ordinary Differential Equations (ODEs).
We show that our proposed scheme yields good numerical approximation accuracy without being affected by the stiffness, thus outperforming in some cases the ode45 and ode15s functions.
arXiv Detail & Related papers (2021-08-03T15:49:17Z)
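The defining trick of random projection networks, sketched below under assumptions (hypothetical names, a plain function-fitting task rather than the paper's ODE collocation), is that hidden weights are drawn once at random and frozen, so only the linear output layer is solved, reducing training to a least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)[:, None]
y = np.exp(-5.0 * x).ravel()              # target: a stiff-looking decay

# Random, frozen hidden layer; only beta below is ever "trained".
W = rng.normal(size=(1, 100))
b = rng.normal(size=100)
H = np.tanh(x @ W + b)                    # (200, 100) random features

# Output weights via linear least squares -- no gradient descent needed.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ beta
```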
- Agnostic Proper Learning of Halfspaces under Gaussian Marginals [56.01192577666607]
We study the problem of agnostically learning halfspaces under the Gaussian distribution.
Our main result is the first proper learning algorithm for this problem.
arXiv Detail & Related papers (2021-02-10T18:40:44Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
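For orientation, a conditional gradient (Frank-Wolfe) step only ever needs a linear minimization oracle over the feasible set, which is what makes processing a subset of constraints per iteration natural. A minimal generic sketch follows (the classic method with an l1-ball toy oracle, not the paper's variance-reduced variants):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, n_iter=100):
    """Conditional gradient: move toward the minimizer of the linearized
    objective over the feasible set, as returned by the oracle lmo."""
    x = x0
    for t in range(n_iter):
        s = lmo(grad(x))                    # argmin_{s in C} <grad(x), s>
        x = x + 2.0 / (t + 2.0) * (s - x)   # classic diminishing step size
    return x

# Toy: minimize |x - c|^2 over the unit l1 ball via its cheap linear oracle.
c = np.array([0.8, -0.3])
grad = lambda x: 2.0 * (x - c)
lmo = lambda g: -np.sign(g) * (np.abs(g) == np.abs(g).max())  # l1-ball vertex
x_star = frank_wolfe(grad, lmo, x0=np.zeros(2))
```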
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
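As a minimal illustration of the sketching idea behind such methods (plain sketch-and-solve with a Gaussian projection and assumed dimensions, not the paper's optimal iterative scheme): a tall n x d problem is compressed to an m x d one, with m well above d but far below n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 50, 300        # rows, cols, sketch size (d << m << n)
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

# Gaussian sketch: solve the small m x d problem instead of the n x d one.
S = rng.normal(size=(m, n)) / np.sqrt(m)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sk - x_full))   # small for sufficiently large m
```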
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.