Compressed particle methods for expensive models with application in
Astronomy and Remote Sensing
- URL: http://arxiv.org/abs/2107.08465v1
- Date: Sun, 18 Jul 2021 14:45:23 GMT
- Title: Compressed particle methods for expensive models with application in
Astronomy and Remote Sensing
- Authors: Luca Martino, Víctor Elvira, Javier López-Santiago, Gustau
Camps-Valls
- Abstract summary: We introduce a novel approach where the expensive model is evaluated only at some well-chosen samples.
We provide theoretical results supporting the novel algorithms and give empirical evidence of the performance of the proposed method in several numerical experiments.
Two of them are real-world applications in astronomy and satellite remote sensing.
- Score: 15.874578163779047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many inference problems, the evaluation of complex and costly models is
often required. In this context, Bayesian methods have become very popular in
several fields over recent years for parameter inversion, model selection, and
uncertainty quantification. Bayesian inference requires the
approximation of complicated integrals involving (often costly) posterior
distributions. Generally, this approximation is obtained by means of Monte
Carlo (MC) methods. To reduce the computational cost of these techniques,
surrogate models (also called emulators) are often employed. An alternative
approach is the so-called Approximate Bayesian Computation (ABC) scheme. ABC
does not require evaluating the costly model, only the ability to simulate
artificial data according to that model. Moreover, ABC requires the choice of a
suitable distance between the real and artificial data. In this work, we
introduce a novel approach in which the expensive model is evaluated only at
some well-chosen samples. The selection of these nodes is based on the
so-called compressed Monte Carlo (CMC) scheme. We
provide theoretical results supporting the novel algorithms and give empirical
evidence of the performance of the proposed method in several numerical
experiments. Two of them are real-world applications in astronomy and satellite
remote sensing.
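The following is a minimal, illustrative sketch (Python with NumPy) of the general workflow the abstract describes: evaluate the expensive model only at a small set of nodes, build a cheap surrogate from those evaluations, and run an otherwise standard Monte Carlo approximation using the surrogate. The toy forward model, the node-selection rule (prior quantiles), the Gaussian likelihood, and all names here are assumptions made for illustration; they are not the paper's actual CMC construction, which is specified, together with its theoretical guarantees, in the paper itself.

```python
import numpy as np

# Hypothetical expensive forward model; in the paper's applications this would be
# an astronomical or radiative-transfer simulator (the form here is illustrative).
def expensive_model(theta):
    return np.sin(3.0 * theta) + 0.5 * theta

rng = np.random.default_rng(0)
y_obs = expensive_model(1.2) + 0.1 * rng.normal()    # synthetic observation
prior_samples = rng.uniform(-2.0, 2.0, size=5000)    # cheap draws from the prior

# Step 1: choose a small set of nodes. Here we use evenly spaced prior quantiles
# as a simple stand-in for the compressed Monte Carlo selection rule.
n_nodes = 15
nodes = np.quantile(prior_samples, np.linspace(0.0, 1.0, n_nodes))

# Step 2: evaluate the costly model only at the chosen nodes.
node_outputs = np.array([expensive_model(t) for t in nodes])

# Step 3: build a cheap interpolating surrogate from the node evaluations.
def surrogate(theta):
    return np.interp(theta, nodes, node_outputs)

# Step 4: standard self-normalized importance sampling, with the surrogate in
# place of the expensive model (Gaussian likelihood, noise level assumed known).
sigma = 0.1
log_w = -0.5 * ((y_obs - surrogate(prior_samples)) / sigma) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()
posterior_mean = np.sum(w * prior_samples)
print(f"posterior mean estimate: {posterior_mean:.3f} (model calls: {n_nodes})")
```

The point of the sketch is only the accounting: the costly function is called n_nodes times rather than once per Monte Carlo sample, which is the cost reduction the abstract refers to.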
Related papers
- Scalable Inference for Bayesian Multinomial Logistic-Normal Dynamic Linear Models [0.5735035463793009]
This article develops an efficient and accurate approach to posterior state estimation, called Fenrir.
Our experiments suggest that Fenrir can be three orders of magnitude more efficient than Stan.
Our methods are made available to the community as a user-friendly software library written in C++ with an R interface.
arXiv Detail & Related papers (2024-10-07T23:20:14Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Sparse Bayesian Learning for Complex-Valued Rational Approximations [0.03392423750246091]
Surrogate models are used to alleviate the computational burden in engineering tasks.
These models show a strongly non-linear dependence on their input parameters.
We apply a sparse learning approach to the rational approximation.
arXiv Detail & Related papers (2022-06-06T12:06:13Z) - Bayesian Target-Vector Optimization for Efficient Parameter
Reconstruction [0.0]
We introduce a target-vector optimization scheme that considers all $K$ contributions of the model function and that is specifically suited for parameter reconstruction problems.
It also makes it possible to determine accurate uncertainty estimates with very few observations of the actual model function.
arXiv Detail & Related papers (2022-02-23T15:13:32Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Residual Overfit Method of Exploration [78.07532520582313]
We propose an approximate exploration methodology based on fitting only two point estimates, one tuned and one overfit.
The approach drives exploration towards actions where the overfit model exhibits the most overfitting compared to the tuned model.
We compare ROME against a set of established contextual bandit methods on three datasets and find it to be one of the best performing.
arXiv Detail & Related papers (2021-10-06T17:05:33Z) - Finding Geometric Models by Clustering in the Consensus Space [61.65661010039768]
We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies.
We present a number of applications where the use of multiple geometric models improves accuracy.
These include pose estimation from multiple generalized homographies and trajectory estimation of fast-moving objects.
arXiv Detail & Related papers (2021-03-25T14:35:07Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Detangling robustness in high dimensions: composite versus
model-averaged estimation [11.658462692891355]
Robust methods, though ubiquitous in practice, are yet to be fully understood in the context of regularized estimation and high dimensions.
This paper provides a toolbox to further study robustness in these settings and focuses on prediction.
arXiv Detail & Related papers (2020-06-12T20:40:15Z) - Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)