Towards fast machine-learning-assisted Bayesian posterior inference of
realistic microseismic events
- URL: http://arxiv.org/abs/2101.04724v1
- Date: Tue, 12 Jan 2021 19:51:32 GMT
- Title: Towards fast machine-learning-assisted Bayesian posterior inference of
realistic microseismic events
- Authors: Davide Piras, Alessio Spurio Mancini, Benjamin Joachimi, Michael P.
Hobson
- Abstract summary: We train a machine learning algorithm on the power spectrum of the recorded pressure wave.
We show that our approach is computationally inexpensive, as it can be run in less than 1 hour on a commercial laptop.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian inference applied to microseismic activity monitoring allows for
principled estimation of the coordinates of microseismic events from recorded
seismograms, and their associated uncertainties. However, forward modelling of
these microseismic events, necessary to perform Bayesian source inversion, can
be prohibitively expensive in terms of computational resources. A viable
solution is to train a surrogate model based on machine learning techniques, to
emulate the forward model and thus accelerate Bayesian inference. In this
paper, we improve on previous work, which considered only sources with
isotropic moment tensor. We train a machine learning algorithm on the power
spectrum of the recorded pressure wave and show that the trained emulator
allows for the complete and fast retrieval of the event coordinates for
$\textit{any}$ source mechanism. Moreover, we show that our approach is
computationally inexpensive, as it can be run in less than 1 hour on a
commercial laptop, while yielding accurate results using fewer than $10^4$
training seismograms. We additionally demonstrate how the trained emulators can
be used to identify the source mechanism through the estimation of the Bayesian
evidence. This work lays the foundations for the efficient localisation and
characterisation of any recorded seismogram, thus helping to quantify human
impact on seismic activity and mitigate seismic hazard.
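To make the workflow described in the abstract concrete, below is a minimal, self-contained sketch of emulator-accelerated Bayesian source inversion. It is not the authors' code: the analytic toy forward model, the Gaussian likelihood, and the choice of MLPRegressor and emcee are illustrative assumptions standing in for the real seismic simulator, the paper's emulator architecture, and its sampler.
```python
# Minimal sketch of emulator-accelerated Bayesian source inversion -- NOT the
# authors' code. The analytic toy_forward_model, the Gaussian likelihood, and
# the choice of MLPRegressor + emcee are illustrative assumptions standing in
# for the real seismic simulator, emulator architecture, and sampler.
import numpy as np
from sklearn.neural_network import MLPRegressor
import emcee

rng = np.random.default_rng(0)
n_freq = 64  # length of the emulated power spectrum

def toy_forward_model(xyz):
    """Hypothetical stand-in for the expensive forward simulation:
    maps 3-D source coordinates to a power spectrum."""
    f = np.linspace(0.1, 1.0, n_freq)
    return np.exp(-f * abs(xyz[2])) * (1.0 + 0.3 * np.sin(f * (xyz[0] + xyz[1])))

# 1) Training set of (coordinates, spectrum) pairs; the paper reports that
#    fewer than 10^4 training seismograms are sufficient.
X_train = rng.uniform(-1.0, 1.0, size=(5000, 3))
Y_train = np.array([toy_forward_model(x) for x in X_train])

# 2) Train the surrogate (emulator) of the forward model.
emulator = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
emulator.fit(X_train, Y_train)

# 3) Bayesian inference of the source coordinates, with the emulator replacing
#    the expensive simulator inside the likelihood.
x_true = np.array([0.3, -0.2, 0.5])
noise_sigma = 0.01
observed = toy_forward_model(x_true) + rng.normal(0.0, noise_sigma, n_freq)

def log_posterior(theta):
    if np.any(np.abs(theta) > 1.0):  # uniform prior box
        return -np.inf
    pred = emulator.predict(theta[None, :])[0]
    return -0.5 * np.sum((observed - pred) ** 2) / noise_sigma ** 2

ndim, nwalkers = 3, 32
p0 = rng.uniform(-1.0, 1.0, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print("posterior mean coordinates:", samples.mean(axis=0))
```
Identifying the source mechanism via the Bayesian evidence, as described in the abstract, would replace the MCMC step with an evidence-estimating sampler (e.g., nested sampling) run once per candidate mechanism, with the mechanisms then compared through their evidences.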
Related papers
- Rapid Bayesian identification of sparse nonlinear dynamics from scarce and noisy data [2.3018169548556977]
We recast the SINDy method within a Bayesian framework and use Gaussian approximations for the prior and likelihood to speed up computation.
The resulting method, Bayesian-SINDy, not only quantifies uncertainty in the estimated parameters but is also more robust when learning the correct model from limited and noisy data.
arXiv Detail & Related papers (2024-02-23T14:41:35Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Joint Microseismic Event Detection and Location with a Detection Transformer [8.505271826735118]
We propose an approach to unify event detection and source location into a single framework.
The proposed network is trained on synthetic data simulating multiple microseismic events corresponding to random source locations.
arXiv Detail & Related papers (2023-07-16T10:56:46Z) - Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z) - Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z) - Adversarial robustness of amortized Bayesian inference [3.308743964406687]
The idea of amortized Bayesian inference is to initially invest computational cost in training an inference network on simulated data, which can subsequently be used to rapidly perform inference on new observations.
We show that almost unrecognizable, targeted perturbations of the observations can lead to drastic changes in the predicted posterior and highly unrealistic posterior predictive samples.
We propose a computationally efficient regularization scheme based on penalizing the Fisher information of the conditional density estimator.
arXiv Detail & Related papers (2023-05-24T10:18:45Z) - Stabilizing Machine Learning Prediction of Dynamics: Noise and
Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
arXiv Detail & Related papers (2022-11-09T23:40:52Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (a minimal sketch appears after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Demonstration-Efficient Guided Policy Search via Imitation of Robust
Tube MPC [36.3065978427856]
We propose a strategy to compress a computationally expensive Model Predictive Controller (MPC) into a more computationally efficient representation based on a deep neural network and Imitation Learning (IL).
By generating a Robust Tube variant (RTMPC) of the MPC and leveraging properties from the tube, we introduce a data augmentation method that enables high demonstration-efficiency.
Our method outperforms strategies commonly employed in IL, such as DAgger and Domain Randomization, in terms of demonstration-efficiency and robustness to perturbations unseen during training.
arXiv Detail & Related papers (2021-09-21T01:50:19Z) - Sample and Computation Redistribution for Efficient Face Detection [137.19388513633484]
Training data sampling and computation distribution strategies are the keys to efficient and accurate face detection.
SCRFD-34GF outperforms the best competitor, TinaFace, by $3.86\%$ (AP at hard set) while being more than $3\times$ faster on GPUs with VGA-resolution images.
arXiv Detail & Related papers (2021-05-10T23:51:14Z)
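As a concrete illustration of the ATC idea summarized in the list above, here is a minimal sketch with assumed details (toy data, max-softmax confidence as the score); it is not the authors' released implementation.
```python
# Minimal sketch of Average Thresholded Confidence (ATC), with assumed details:
# pick a confidence threshold t on labelled source data so that the fraction of
# source examples with confidence above t matches the source accuracy, then
# predict target accuracy as the fraction of unlabeled target examples above t.
import numpy as np

def fit_atc_threshold(source_confidence, source_correct):
    """source_confidence: per-example confidence (e.g. max softmax probability).
    source_correct: boolean array, whether each source prediction was correct."""
    accuracy = source_correct.mean()
    # Choose t so that P(confidence > t) on source data equals the accuracy.
    return np.quantile(source_confidence, 1.0 - accuracy)

def predict_target_accuracy(target_confidence, threshold):
    return (target_confidence > threshold).mean()

# Toy usage with random numbers standing in for real model outputs.
rng = np.random.default_rng(1)
src_conf = rng.uniform(0.5, 1.0, 10000)
src_correct = rng.uniform(0.0, 1.0, 10000) < src_conf  # toy confidence-accuracy link
tgt_conf = rng.uniform(0.4, 1.0, 10000)                # shifted target distribution

t = fit_atc_threshold(src_conf, src_correct)
print("estimated target accuracy:", predict_target_accuracy(tgt_conf, t))
```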
This list is automatically generated from the titles and abstracts of the papers on this site.