Uncertainty Quantification and Experimental Design for Large-Scale Linear Inverse Problems under Gaussian Process Priors
- URL: http://arxiv.org/abs/2109.03457v1
- Date: Wed, 8 Sep 2021 06:54:32 GMT
- Title: Uncertainty Quantification and Experimental Design for Large-Scale Linear Inverse Problems under Gaussian Process Priors
- Authors: Cédric Travelletti, David Ginsbourger and Niklas Linde
- Abstract summary: We show that in inverse problems involving integral operators, one faces additional difficulties that hinder inversion on large grids.
We introduce an implicit representation of posterior covariance matrices that reduces the memory footprint.
We demonstrate our approach by computing sequential data collection plans for excursion set recovery for a gravimetric inverse problem.
- Score: 0.6445605125467573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the use of Gaussian process (GP) priors for solving inverse
problems in a Bayesian framework. As is well known, the computational
complexity of GPs scales cubically in the number of data points. Here we show
that, in the context of inverse problems involving integral operators, one faces
additional difficulties that hinder inversion on large grids. Furthermore, in
that context, covariance matrices can become too large to be stored. By
leveraging results about sequential disintegrations of Gaussian measures, we
introduce an implicit representation of posterior covariance matrices that
reduces the memory footprint by storing only low-rank intermediate matrices,
while allowing individual elements to be accessed on the fly without ever
building full posterior covariance matrices. Moreover, this representation
allows for fast sequential inclusion of new observations. These
features are crucial when considering sequential experimental design tasks. We
demonstrate our approach by computing sequential data collection plans for
excursion set recovery for a gravimetric inverse problem, where the goal is to
provide fine resolution estimates of high density regions inside the Stromboli
volcano, Italy. Sequential data collection plans are computed by extending the
weighted integrated variance reduction (wIVR) criterion to inverse problems.
Our results show that this criterion is able to significantly reduce the
uncertainty on the excursion volume, reaching close to minimal levels of
residual uncertainty. Overall, our techniques allow the advantages of
probabilistic models to be brought to bear on large-scale inverse problems
arising in the natural sciences.
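As a concrete illustration of the implicit representation described in the abstract, here is a minimal NumPy sketch, not the authors' implementation: it assumes a matrix-free prior covariance (a function computing v -> C0 v), precomputed prior variances, a known linear forward operator G, and i.i.d. Gaussian observation noise; the class name and interface are hypothetical.

```python
import numpy as np

class ImplicitPosterior:
    """Posterior covariance kept implicit as C_n = C_0 - sum_i L_i L_i^T;
    only the low-rank factors L_i are stored, never a full n x n matrix."""

    def __init__(self, prior_cov_matvec, prior_var, noise_var):
        self.c0 = prior_cov_matvec        # v -> C_0 @ v, matrix-free
        self.prior_var = np.asarray(prior_var, dtype=float)
        self.noise_var = float(noise_var)
        self.factors = []                 # one low-rank factor per data batch

    def cov_matvec(self, v):
        """Apply the current posterior covariance C_n to a vector v."""
        out = self.c0(v)
        for L in self.factors:
            out = out - L @ (L.T @ v)
        return out

    def assimilate(self, G):
        """Sequentially include a batch y = G u + eps of m observations.
        One disintegration step: C_{n+1} = C_n - C_n G^T S^{-1} G C_n,
        with S = G C_n G^T + noise_var * I, stored as one new factor."""
        CGt = np.column_stack([self.cov_matvec(g) for g in G])   # C_n G^T
        S = G @ CGt + self.noise_var * np.eye(G.shape[0])
        R = np.linalg.cholesky(S)                                # S = R R^T
        self.factors.append(CGt @ np.linalg.inv(R).T)            # L = C_n G^T R^{-T}

    def variance(self):
        """Pointwise posterior variances, computed element-wise on the fly."""
        var = self.prior_var.copy()
        for L in self.factors:
            var -= np.einsum("ij,ij->i", L, L)   # diag(L @ L.T)
        return var
```

Under these assumptions, memory scales with the number of grid cells times the total number of observations rather than quadratically in the number of cells, and each new data batch only appends one factor, which is what makes the fast sequential inclusion of observations possible.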
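On top of such a representation, a greedy design loop driven by a weighted integrated variance reduction score can be sketched as below. The closed form uses the standard rank-one variance-reduction identity for a single noisy linear observation; the excursion-probability weighting and the helper names are illustrative assumptions (the paper's exact wIVR extension is not reproduced here), and tracking of the posterior mean, needed to evaluate those weights in practice, is omitted for brevity.

```python
import numpy as np
from scipy.stats import norm

def excursion_weights(post_mean, post_var, threshold):
    """Illustrative weighting: probability that each cell lies in the
    excursion set {u >= threshold} under the current Gaussian marginals."""
    return norm.sf(threshold, loc=post_mean, scale=np.sqrt(post_var))

def wivr_scores(posterior, candidates, weights):
    """Weighted integrated variance reduction of each candidate row g:
    wIVR(g) = sum_j w_j (C_n g)_j^2 / (g^T C_n g + noise_var),
    the weighted pointwise variance drop caused by one noisy
    observation of g . u (rank-one update identity)."""
    scores = np.empty(len(candidates))
    for k, g in enumerate(candidates):
        Cg = posterior.cov_matvec(g)             # C_n g, matrix-free
        scores[k] = weights @ (Cg ** 2) / (g @ Cg + posterior.noise_var)
    return scores

def greedy_design(posterior, candidates, weights_fn, n_steps):
    """Sequential plan: observe the best-scoring candidate, update, repeat.
    weights_fn: posterior -> per-cell weights (e.g. excursion probabilities)."""
    plan = []
    for _ in range(n_steps):
        w = weights_fn(posterior)
        k = int(np.argmax(wivr_scores(posterior, candidates, w)))
        posterior.assimilate(candidates[k][None, :])   # rank-one batch
        plan.append(k)
    return plan
```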
Related papers
- Novel Pivoted Cholesky Decompositions for Efficient Gaussian Process Inference [2.8391355909797644]
Cholesky decomposition is a fundamental tool for solving linear systems with symmetric and positive definite matrices. We introduce a pivoting strategy that iteratively permutes the rows and columns of the matrix. Our results show that the proposed selection strategies are on par with, and in most cases outperform, traditional baselines.
arXiv Detail & Related papers (2025-07-28T10:01:43Z)
- Solving Inverse Problems via Diffusion Optimal Control [3.0079490585515343]
We derive a diffusion-based optimal controller inspired by the iterative Linear Quadratic Regulator (iLQR) algorithm.
We show that the idealized posterior sampling equation can be recovered as a special case of our algorithm.
We then evaluate our method against a selection of neural inverse problem solvers and establish a new baseline for inverse problems in image reconstruction.
arXiv Detail & Related papers (2024-12-21T19:47:06Z)
- Refined Risk Bounds for Unbounded Losses via Transductive Priors [58.967816314671296]
We revisit the sequential variants of linear regression with the squared loss, classification problems with hinge loss, and logistic regression.
Our key tools are based on the exponential weights algorithm with carefully chosen transductive priors.
arXiv Detail & Related papers (2024-10-29T00:01:04Z)
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems [2.375943263571389]
In inverse problems, the incorporation of a sparsity prior yields a regularization effect on the solution.
We propose a probabilistic sparsity prior formulated as a mixture of Gaussians, capable of modeling sparsity with respect to a generic basis.
We put forth both a supervised and an unsupervised training strategy to estimate the parameters of this network.
arXiv Detail & Related papers (2024-01-29T22:52:57Z)
- Randomized Physics-Informed Machine Learning for Uncertainty Quantification in High-Dimensional Inverse Problems [49.1574468325115]
We propose a physics-informed machine learning method for uncertainty quantification in high-dimensional inverse problems.
We show analytically and through comparison with Hamiltonian Monte Carlo that the rPICKLE posterior converges to the true posterior given by Bayes' rule.
arXiv Detail & Related papers (2023-12-11T07:33:16Z)
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- Multistage Stochastic Optimization via Kernels [3.7565501074323224]
We develop a non-parametric, data-driven, tractable approach for solving multistage stochastic optimization problems.
We show that the proposed method produces decision rules with near-optimal average performance.
arXiv Detail & Related papers (2023-03-11T23:19:32Z)
- Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization [4.7464518249313805]
The sub-gradient method (SubGM) is used to recover a low-rank matrix from a limited number of measurements.
We show that SubGM converges to the true solution, even under arbitrarily large and arbitrarily dense noise values.
arXiv Detail & Related papers (2022-02-17T17:50:04Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Total Deep Variation: A Stable Regularizer for Inverse Problems [71.90933869570914]
We introduce the data-driven general-purpose total deep variation regularizer.
At its core, a convolutional neural network extracts local features on multiple scales and in successive blocks.
We achieve state-of-the-art results for numerous imaging tasks.
arXiv Detail & Related papers (2020-06-15T21:54:15Z)