Bayesian full waveform inversion with sequential surrogate model refinement
- URL: http://arxiv.org/abs/2505.03246v1
- Date: Tue, 06 May 2025 07:17:03 GMT
- Title: Bayesian full waveform inversion with sequential surrogate model refinement
- Authors: Giovanni Angelo Meles, Stefano Marelli, Niklas Linde
- Abstract summary: Markov chain Monte Carlo (MCMC) methods sample posterior probability density functions. Dimensionality-reduction methods can help define the prior and train surrogate models. We propose an iterative method that progressively refines the surrogate model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian formulations of inverse problems are attractive for their ability to incorporate prior knowledge and update probabilistic models as new data become available. Markov chain Monte Carlo (MCMC) methods sample posterior probability density functions (pdfs) but require accurate prior models and many likelihood evaluations. Dimensionality-reduction methods, such as principal component analysis (PCA), can help define the prior and train surrogate models that efficiently approximate costly forward solvers. However, for problems like full waveform inversion, the complex input/output relations often cannot be captured well by surrogate models trained only on prior samples, leading to biased results. Including samples from high-posterior-probability regions can improve accuracy, but these regions are hard to identify in advance. We propose an iterative method that progressively refines the surrogate model. Starting with low-frequency data, we train an initial surrogate and perform an MCMC inversion. The resulting posterior samples are then used to retrain the surrogate, allowing us to expand the frequency bandwidth in the next inversion step. Repeating this process reduces model errors and improves the surrogate's accuracy over the relevant input domain. Ultimately, we obtain a highly accurate surrogate across the full bandwidth, enabling a final MCMC inversion. Numerical results from 2D synthetic crosshole Ground Penetrating Radar (GPR) examples show that our method outperforms ray-based approaches and those relying solely on prior sampling. The overall computational cost is reduced by about two orders of magnitude compared to full finite-difference time-domain modeling.
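The refinement loop described in the abstract can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: the "expensive" forward solver, the linear least-squares surrogate, and the random-walk Metropolis sampler are all simplified stand-ins (the paper uses finite-difference time-domain GPR modeling, PCA-based dimensionality reduction, and a far more capable surrogate). What it does show is the structure of the method: train a surrogate on prior samples, run MCMC at a restricted bandwidth, retrain on posterior samples, and expand the bandwidth.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(model, freqs):
    """Toy 'expensive' forward solver: nonlinear response per frequency."""
    return np.array([np.sin(f * model).sum() + 0.1 * (model**2).sum()
                     for f in freqs])

def fit_surrogate(X, Y):
    """Linear least-squares surrogate (stand-in for the paper's surrogate)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return lambda m: np.hstack([m, 1.0]) @ coef

def mcmc(log_post, m0, n_steps=2000, step=0.05):
    """Random-walk Metropolis sampler."""
    m, lp = m0.copy(), log_post(m0)
    samples = []
    for _ in range(n_steps):
        prop = m + step * rng.standard_normal(m.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            m, lp = prop, lp_prop
        samples.append(m.copy())
    return np.array(samples)

dim = 3
m_true = rng.standard_normal(dim)      # unknown subsurface model (toy)
full_band = [1.0, 2.0, 4.0, 8.0]       # full frequency bandwidth
sigma = 0.05                           # observational noise level

# schedule of expanding frequency bands: low frequencies first, then full band
bands = [full_band[:2], full_band[:3], full_band]

# initial training set: prior samples only
X = rng.standard_normal((200, dim))

for band in bands:
    d_obs = forward(m_true, band) + sigma * rng.standard_normal(len(band))
    Y = np.array([forward(x, band) for x in X])   # expensive solver runs
    surr = fit_surrogate(X, Y)                    # cheap approximation
    def log_post(m, surr=surr, d=d_obs):
        ll = -0.5 * np.sum((surr(m) - d)**2) / sigma**2
        return ll - 0.5 * np.sum(m**2)            # standard-normal prior
    chain = mcmc(log_post, np.zeros(dim))
    # refine: augment the training set with (thinned) posterior samples,
    # so the next surrogate is accurate where the posterior concentrates
    X = np.vstack([X, chain[::20]])

posterior_mean = chain[len(chain) // 2 :].mean(axis=0)
```

The key design point mirrored from the abstract is that `X` grows with posterior samples from each inversion stage, so each retrained surrogate is accurate precisely in the high-posterior-probability region where the next, wider-bandwidth inversion will evaluate it.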
Related papers
- Flow Matching based Sequential Recommender Model [54.815225661065924]
This study introduces FMRec, a Flow Matching based model that employs a straight flow trajectory and a modified loss tailored for the recommendation task. FMRec achieves an average improvement of 6.53% over state-of-the-art methods.
arXiv Detail & Related papers (2025-05-22T06:53:03Z) - Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing [58.52119063742121]
Retraining a model using its own predictions together with the original, potentially noisy labels is a well-known strategy for improving model performance. This paper addresses the question of how to optimally combine the model's predictions and the provided labels. Our main contribution is the derivation of the Bayes-optimal aggregator function to combine the current model's predictions and the given labels.
arXiv Detail & Related papers (2025-05-21T07:16:44Z) - Distributional Diffusion Models with Scoring Rules [83.38210785728994]
Diffusion models generate high-quality synthetic data, but producing such outputs requires many discretization steps. We propose to accomplish sample generation by learning the posterior distribution of clean data samples.
arXiv Detail & Related papers (2025-02-04T16:59:03Z) - Enhancing Diffusion Models for Inverse Problems with Covariance-Aware Posterior Sampling [3.866047645663101]
In computer vision, for example, tasks such as inpainting, deblurring, and super-resolution can be effectively modeled as inverse problems. DDPMs are shown to provide a promising solution to noisy linear inverse problems without the need for additional task-specific training.
arXiv Detail & Related papers (2024-12-28T06:17:44Z) - Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
Amortized Posterior Sampling is a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains.
arXiv Detail & Related papers (2024-07-25T09:53:12Z) - EM Distillation for One-step Diffusion Models [65.57766773137068]
We propose a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of quality. We develop a reparametrized sampling scheme and a noise cancellation technique that together stabilize the distillation process.
arXiv Detail & Related papers (2024-05-27T05:55:22Z) - Scalable diffusion posterior sampling in infinite-dimensional inverse problems [5.340736751238338]
We propose a scalable diffusion posterior sampling (SDPS) method to bypass forward mapping evaluations during sampling. The approach is shown to generalize to infinite-dimensional diffusion models and is validated through rigorous convergence analysis and high-dimensional CT imaging experiments.
arXiv Detail & Related papers (2024-05-24T15:33:27Z) - Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - Adaptive Multi-step Refinement Network for Robust Point Cloud Registration [82.64560249066734]
Point cloud registration estimates the relative rigid transformation between two point clouds of the same scene. We propose an adaptive multi-step refinement network that refines the registration quality at each step by leveraging information from the preceding step. Our method achieves state-of-the-art performance on both the 3DMatch/3DLoMatch and KITTI benchmarks.
arXiv Detail & Related papers (2023-12-05T18:59:41Z) - Efficient Learning of Accurate Surrogates for Simulations of Complex Systems [0.0]
We introduce an online learning method empowered by an adaptive sampling strategy.
It ensures that all turning points on the model response surface are included in the training data.
We apply our method to simulations of nuclear matter to demonstrate that highly accurate surrogates can be reliably auto-generated.
arXiv Detail & Related papers (2022-07-11T20:51:11Z) - Sample-Efficient Optimisation with Probabilistic Transformer Surrogates [66.98962321504085]
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation.
We observe two drawbacks stemming from their training procedure and loss definition, hindering their direct deployment as proxies in black-box optimisation.
We introduce two components: 1) a BO-tailored training prior supporting non-uniformly distributed points, and 2) a novel approximate posterior regulariser trading-off accuracy and input sensitivity to filter favourable stationary points for improved predictive performance.
arXiv Detail & Related papers (2022-05-27T11:13:17Z) - Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z) - Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach [38.45568741734893]
Inference in our GP model is done with a single feed-forward pass through the network, which is significantly faster than semi-amortized methods. We show that our approach attains higher test-data likelihood than the state of the art on several benchmark datasets.
arXiv Detail & Related papers (2021-02-05T13:01:12Z) - Quantifying the Uncertainty in Model Parameters Using Gaussian Process-Based Markov Chain Monte Carlo: An Application to Cardiac Electrophysiological Models [7.8316005711996235]
Estimates of patient-specific model parameters are important for personalized modeling.
Standard Markov Chain Monte Carlo sampling requires repeated model simulations that are computationally infeasible.
A common solution is to replace the simulation model with a computationally efficient surrogate for faster sampling.
arXiv Detail & Related papers (2020-06-02T23:48:15Z)
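The surrogate-accelerated MCMC idea shared by the last few entries above, replacing the expensive simulator with a cheap emulator inside the sampler, can be sketched with a one-dimensional Gaussian-process surrogate. Everything here (the toy simulator, the RBF kernel length scale, the noise level) is an illustrative assumption, not taken from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta):
    """Stand-in for an expensive simulation (e.g. an electrophysiology model)."""
    return np.sin(3 * theta) + 0.5 * theta**2

# design points: a handful of expensive simulator runs, done once up front
X = np.linspace(-2, 2, 15)
y = simulator(X)

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

# GP posterior mean acts as a cheap surrogate for the simulator
K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
alpha = np.linalg.solve(K, y)
surrogate = lambda t: rbf(np.atleast_1d(t), X) @ alpha

# Metropolis sampling evaluates only the surrogate, never the simulator
d_obs, sigma = simulator(np.array([0.7]))[0], 0.1
def log_post(t):
    return -0.5 * ((surrogate(t)[0] - d_obs) / sigma)**2 - 0.5 * t**2

t, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = t + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        t, lp = prop, lp_prop
    chain.append(t)
chain = np.array(chain)
```

The cost structure is the point: the simulator is called only 16 times to build the surrogate and the observation, while the 5000 likelihood evaluations in the chain each cost only a small kernel product.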
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.