Efficient methods for Gaussian Markov random fields under sparse linear constraints
- URL: http://arxiv.org/abs/2106.01712v1
- Date: Thu, 3 Jun 2021 09:31:12 GMT
- Title: Efficient methods for Gaussian Markov random fields under sparse linear constraints
- Authors: David Bolin and Jonas Wallin
- Abstract summary: Methods for inference and simulation of linearly constrained Gaussian Markov Random Fields (GMRFs) are computationally prohibitive when the number of constraints is large.
We propose a new class of methods to overcome these challenges in the common case of sparse constraints.
- Score: 2.741266294612776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Methods for inference and simulation of linearly constrained Gaussian Markov
Random Fields (GMRFs) are computationally prohibitive when the number of
constraints is large. In some cases, such as for intrinsic GMRFs, they may even
be infeasible. We propose a new class of methods to overcome these challenges
in the common case of sparse constraints, where one has a large number of
constraints and each only involves a few elements. Our methods rely on a basis
transformation into blocks of constrained versus non-constrained subspaces, and
we show that the methods greatly outperform existing alternatives in terms of
computational cost. By combining the proposed methods with the stochastic
partial differential equation approach for Gaussian random fields, we also show
how to formulate Gaussian process regression with linear constraints in a GMRF
setting to reduce computational cost. This is illustrated in two applications
with simulated data.
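For orientation, below is a minimal sketch of the classical conditioning-by-kriging baseline that methods like the paper's improve on: draw an unconstrained sample x ~ N(0, Q^{-1}) and correct it to satisfy Ax = b. All names (Q, A, b) are illustrative assumptions, not the authors' code; the dense k-by-k solve is exactly the cost that grows prohibitive when the number of constraints k is large.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def sample_constrained_gmrf(Q, A, b, rng):
    """One draw of x | Ax = b, where x ~ N(0, Q^{-1}) with sparse precision Q."""
    n = Q.shape[0]
    # Unconstrained sample: if Q = L L^T, solving L^T x = z with z ~ N(0, I)
    # gives x ~ N(0, Q^{-1}). A dense factor is used here for clarity only.
    L = np.linalg.cholesky(Q.toarray())
    x = np.linalg.solve(L.T, rng.standard_normal(n))
    # Kriging correction: x* = x - Q^{-1} A^T (A Q^{-1} A^T)^{-1} (A x - b).
    lu = splu(sp.csc_matrix(Q))
    V = lu.solve(A.toarray().T)        # V = Q^{-1} A^T, an n-by-k dense matrix
    S = A @ V                          # S = A Q^{-1} A^T, k-by-k and dense
    return x - V @ np.linalg.solve(S, A @ x - b)
```

Even with sparse Q, the correction densifies through S as k grows, which is the bottleneck that constraint-aware approaches such as the paper's basis transformation are designed to avoid.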
Related papers
- Maximum a Posteriori Inference for Factor Graphs via Benders' Decomposition [0.38233569758620056]
We present a method for maximum a posteriori (MAP) inference in general Bayesian factor models.
We derive MAP estimation algorithms for the Bayesian Gaussian mixture model and latent Dirichlet allocation.
arXiv Detail & Related papers (2024-10-24T19:57:56Z)
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints [5.482532589225552]
A new method for imposing hard convex constraints on neural network output values is proposed.
The constraining mapping is implemented as an additional neural network layer.
The proposed method extends naturally to joint constraints that depend on the inputs as well as the output vectors; a generic sketch of the projection idea follows.
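This is not the paper's specific layer construction; as a generic illustration of making outputs hard-feasible, the sketch below appends an exact Euclidean projection onto a convex set (here the probability simplex: nonnegative outputs summing to one) to an arbitrary forward pass.

```python
import numpy as np

def project_to_simplex(v):
    """Exact Euclidean projection of v onto {y : y >= 0, sum(y) = 1}."""
    u = np.sort(v)[::-1]                      # sort entries in descending order
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def constrained_output(forward, x):
    """Run any network `forward` (a hypothetical callable) and project its output."""
    return project_to_simplex(forward(x))
```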
arXiv Detail & Related papers (2023-07-19T21:06:43Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method that achieved the previously best-known result; a bare-bones augmented Lagrangian template is sketched below for orientation.
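The paper's algorithm handles expectation constraints with carefully controlled inexactness; purely as a reference point, here is the textbook augmented Lagrangian loop for a single inequality constraint c(x) <= 0, with a plain gradient inner loop standing in for the paper's stochastic primal solver.

```python
import numpy as np

def augmented_lagrangian(grad_f, c, grad_c, x, rho=10.0, lr=1e-2,
                         inner_steps=200, outer_steps=20):
    """Textbook ALM template for min f(x) s.t. c(x) <= 0 (illustrative only)."""
    lam = 0.0
    for _ in range(outer_steps):
        for _ in range(inner_steps):
            # Gradient of f(x) + (rho/2) * max(0, lam/rho + c(x))^2
            slack = max(0.0, lam / rho + c(x))
            x = x - lr * (grad_f(x) + rho * slack * grad_c(x))
        lam = max(0.0, lam + rho * c(x))   # dual (multiplier) update
    return x, lam
```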
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax = b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
Equipped with a stochastic average gradient (SAG) estimator, the proposed method requires only one sample per iteration, yet it guarantees fast convergence rates on par with more sophisticated variance reduction techniques; the bare conditional gradient template it builds on is sketched below.
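For context only: the sketch below is the plain conditional gradient (Frank-Wolfe) loop on an l1-ball constraint with a full gradient; the paper's contribution is to replace that gradient with a one-sample SAG-style estimator while keeping the same projection-free structure.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, steps=100):
    """Projection-free minimization over the l1 ball of the given radius."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        # Linear minimization oracle for the l1 ball: a signed, scaled coordinate.
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2.0)            # classic diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```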
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Reduction of the Number of Variables in Parametric Constrained Least-Squares Problems [0.20305676256390928]
This paper proposes techniques for reducing the number of involved optimization variables.
We demonstrate the good performance of the proposed techniques in numerical tests and in a linearized MPC problem for a nonlinear benchmark process.
arXiv Detail & Related papers (2020-12-18T18:26:40Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- A Survey of Constrained Gaussian Process Regression: Approaches and Implementation Challenges [0.0]
We provide an overview of several classes of Gaussian process constraints, including positivity or bound constraints, monotonicity and convexity constraints, differential equation constraints, and boundary condition constraints.
We compare the strategies behind each approach as well as the differences in implementation, concluding with a discussion of the computational challenges introduced by constraints.
arXiv Detail & Related papers (2020-06-16T17:03:36Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best known complexity for solving least-squares problems, with no dependence on the condition number; a one-shot sketching variant is illustrated below for reference.
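As a point of reference, the one-shot Gaussian sketch-and-solve routine below illustrates the embedding idea; the paper's methods are iterative and attain stronger guarantees than this crude variant.

```python
import numpy as np

def sketch_and_solve(A, b, sketch_rows, rng):
    """Approximate argmin ||Ax - b|| via a Gaussian random embedding."""
    m, _ = A.shape
    # Gaussian sketch: compress m rows down to sketch_rows before solving.
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```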
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- Sparse Orthogonal Variational Inference for Gaussian Processes [34.476453597078894]
We introduce a new interpretation of sparse variational approximations for Gaussian processes using inducing points.
We show that this formulation recovers existing approximations and, at the same time, allows us to obtain tighter lower bounds on the marginal likelihood as well as new variational inference algorithms.
arXiv Detail & Related papers (2019-10-23T15:01:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.