A Survey of Constrained Gaussian Process Regression: Approaches and
Implementation Challenges
- URL: http://arxiv.org/abs/2006.09319v3
- Date: Wed, 6 Jan 2021 17:45:06 GMT
- Title: A Survey of Constrained Gaussian Process Regression: Approaches and
Implementation Challenges
- Authors: Laura Swiler, Mamikon Gulian, Ari Frankel, Cosmin Safta, John Jakeman
- Abstract summary: We provide an overview of several classes of Gaussian process constraints, including positivity or bound constraints, monotonicity and convexity constraints, differential equation constraints, and boundary condition constraints.
We compare the strategies behind each approach as well as the differences in implementation, concluding with a discussion of the computational challenges introduced by constraints.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian process regression is a popular Bayesian framework for surrogate
modeling of expensive data sources. As part of a broader effort in scientific
machine learning, many recent works have incorporated physical constraints or
other a priori information within Gaussian process regression to supplement
limited data and regularize the behavior of the model. We provide an overview
and survey of several classes of Gaussian process constraints, including
positivity or bound constraints, monotonicity and convexity constraints,
differential equation constraints provided by linear PDEs, and boundary
condition constraints. We compare the strategies behind each approach as well
as the differences in implementation, concluding with a discussion of the
computational challenges introduced by constraints.
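The abstract's simplest class of constraints, positivity, can be illustrated with a warping approach: rather than constraining the GP posterior directly, fit a standard GP to the log of the data and exponentiate the prediction, so the surrogate is positive by construction. The following is a minimal sketch of that idea using only NumPy; the kernel, hyperparameters, and test function are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5, variance=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-4):
    """Standard GP regression posterior mean with an RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Positivity via warping: model g = log(y) with an unconstrained GP,
# then exponentiate the posterior mean so predictions are > 0 everywhere.
X_train = np.linspace(0.1, 2.0, 10)
y_train = np.exp(-X_train) + 0.5          # strictly positive data
X_test = np.linspace(0.1, 2.0, 50)

g_mean = gp_posterior_mean(X_train, np.log(y_train), X_test)
y_pred = np.exp(g_mean)                   # positive by construction

assert np.all(y_pred > 0)
```

The trade-off, as with any warped-GP construction, is that the constraint is enforced exactly but the Gaussian posterior (and its closed-form uncertainty) now lives in the transformed space rather than the original one.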
Related papers
- Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP poses several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - Online Constraint Tightening in Stochastic Model Predictive Control: A
Regression Approach [49.056933332667114]
No analytical solutions exist for chance-constrained optimal control problems.
We propose a data-driven approach for learning the constraint-tightening parameters online during control.
Our approach yields constraint-tightening parameters that tightly satisfy the chance constraints.
arXiv Detail & Related papers (2023-10-04T16:22:02Z) - Multi-Response Heteroscedastic Gaussian Process Models and Their
Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
arXiv Detail & Related papers (2023-08-29T15:06:47Z) - Tightening Discretization-based MILP Models for the Pooling Problem
using Upper Bounds on Bilinear Terms [2.6253445491808307]
Discretization-based methods have been proposed for solving nonconvex optimization problems with bilinear terms.
This paper shows that discretization-based MILP models can be used to solve the pooling problem.
arXiv Detail & Related papers (2022-07-08T05:28:59Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - A Variational Inference Approach to Inverse Problems with Gamma
Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Efficient methods for Gaussian Markov random fields under sparse linear
constraints [2.741266294612776]
Methods for inference and simulation of linearly constrained Gaussian Markov Random Fields (GMRFs) are computationally prohibitive when the number of constraints is large.
We propose a new class of methods to overcome these challenges in the common case of sparse constraints.
arXiv Detail & Related papers (2021-06-03T09:31:12Z) - Gaussian Process Regression constrained by Boundary Value Problems [0.0]
We develop a framework for Gaussian process regression constrained by boundary value problems.
The framework combines co-kriging with the linear transformation of a Gaussian process together with the use of kernels given by spectral expansions in eigenfunctions of the boundary value problem.
We demonstrate that the resulting framework yields more accurate and stable solution inference as compared to physics-informed Gaussian process regression.
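The spectral-expansion idea in this entry can be illustrated with a toy construction: build the kernel from sine eigenfunctions of the Laplacian on [0, 1], so that every GP sample satisfies homogeneous Dirichlet conditions f(0) = f(1) = 0 exactly. This is a hedged sketch of the general approach, not the paper's implementation; the number of modes and the spectral decay rate are arbitrary illustrative choices.

```python
import numpy as np

def dirichlet_kernel(X1, X2, n_modes=20, decay=2.0):
    """Kernel assembled from sine eigenfunctions of -d^2/dx^2 on [0, 1]
    with f(0) = f(1) = 0; any GP with this kernel satisfies the boundary
    conditions exactly because every basis function does."""
    k = np.arange(1, n_modes + 1)
    w = 1.0 / k ** decay                      # spectral weights: faster
    Phi1 = np.sin(np.pi * np.outer(X1, k))    # decay -> smoother samples
    Phi2 = np.sin(np.pi * np.outer(X2, k))
    return Phi1 @ (w[:, None] * Phi2.T)

X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
K = dirichlet_kernel(X, X)
# The kernel's rows and columns at the boundary points are identically
# zero: the prior variance vanishes at x = 0 and x = 1, so the boundary
# condition is enforced exactly rather than approximately.
assert np.allclose(K[0], 0.0) and np.allclose(K[-1], 0.0)
```

Because the constraint is baked into the kernel, no extra observations or penalty terms are needed, which is the advantage this entry claims over physics-informed GP regression that enforces boundary data only approximately.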
arXiv Detail & Related papers (2020-12-22T06:55:15Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.