Bayesian model calibration for block copolymer self-assembly:
Likelihood-free inference and expected information gain computation via
measure transport
- URL: http://arxiv.org/abs/2206.11343v1
- Date: Wed, 22 Jun 2022 19:38:52 GMT
- Title: Bayesian model calibration for block copolymer self-assembly:
Likelihood-free inference and expected information gain computation via
measure transport
- Authors: Ricardo Baptista, Lianghao Cao, Joshua Chen, Omar Ghattas, Fengyi Li,
Youssef M. Marzouk, J. Tinsley Oden
- Abstract summary: We consider the calibration of models describing the phenomenon of block copolymer (BCP) self-assembly.
We tackle this challenging Bayesian inference problem using a likelihood-free approach based on measure transport.
We present a numerical case study based on the Ohta--Kawasaki model for diblock copolymer thin film self-assembly.
- Score: 6.496038875667294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the Bayesian calibration of models describing the phenomenon of
block copolymer (BCP) self-assembly using image data produced by microscopy or
X-ray scattering techniques. To account for the random long-range disorder in
BCP equilibrium structures, we introduce auxiliary variables to represent this
aleatory uncertainty. These variables, however, result in an integrated
likelihood for high-dimensional image data that is generally intractable to
evaluate. We tackle this challenging Bayesian inference problem using a
likelihood-free approach based on measure transport together with the
construction of summary statistics for the image data. We also show that
expected information gains (EIGs) from the observed data about the model
parameters can be computed with no significant additional cost. Lastly, we
present a numerical case study based on the Ohta--Kawasaki model for diblock
copolymer thin film self-assembly and top-down microscopy characterization. For
calibration, we introduce several domain-specific energy- and Fourier-based
summary statistics, and quantify their informativeness using EIG. We
demonstrate the power of the proposed approach to study the effect of data
corruptions and experimental designs on the calibration results.
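The expected information gain (EIG) mentioned in the abstract can be illustrated with a toy estimator. Below is a minimal nested Monte Carlo EIG sketch on a linear-Gaussian model where the EIG has a closed form; this is a generic illustration for checking intuition, not the paper's transport-based, likelihood-free computation (which is designed precisely to avoid evaluating the likelihood).

```python
import numpy as np

# Toy linear-Gaussian model (NOT the Ohta--Kawasaki setup): theta ~ N(0, s_p^2),
# y = theta + noise with noise ~ N(0, s_n^2). Here the EIG has the closed form
# EIG = 0.5 * log(1 + s_p^2 / s_n^2), which lets us check the estimator.
rng = np.random.default_rng(0)
s_p, s_n = 1.0, 0.5

def log_lik(y, theta):
    """Gaussian log-likelihood log p(y | theta)."""
    return -0.5 * ((y - theta) / s_n) ** 2 - np.log(s_n * np.sqrt(2.0 * np.pi))

# Nested Monte Carlo estimator of EIG = E_y[ KL( p(theta|y) || p(theta) ) ]
#   ~= (1/N) sum_n [ log p(y_n|theta_n) - log (1/M) sum_m p(y_n|theta_m) ]
N, M = 2000, 2000
theta_outer = rng.normal(0.0, s_p, N)
y = theta_outer + rng.normal(0.0, s_n, N)
theta_inner = rng.normal(0.0, s_p, M)

log_marg = np.logaddexp.reduce(log_lik(y[:, None], theta_inner[None, :]), axis=1) - np.log(M)
eig_mc = np.mean(log_lik(y, theta_outer) - log_marg)
eig_exact = 0.5 * np.log(1.0 + (s_p / s_n) ** 2)
```

The nested estimator has an upward bias of order 1/M, so with M = 2000 the Monte Carlo estimate lands close to the analytic value; in the intractable-likelihood setting of the paper, the inner marginal-likelihood sum is exactly the quantity that measure transport replaces.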
Related papers
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective.
The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning.
The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z)
- Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve a model pruning of 80% parameters while retaining 93.43% of the original performance without any calibration data.
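The rank-k compression idea above can be sketched with a plain truncated SVD; this is a generic Eckart-Young illustration, not the paper's specific "Data-free Joint Rank-k Approximation" scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32))  # stand-in for one parameter matrix

def rank_k_approx(W, k):
    """Best rank-k approximation of W in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Reconstruction error shrinks monotonically as k grows,
# and vanishes once k reaches the full rank min(64, 32) = 32.
errors = [np.linalg.norm(W - rank_k_approx(W, k)) for k in (4, 8, 16, 32)]
```

Storing the two factors instead of W costs k * (64 + 32) numbers versus 64 * 32, which is where the parameter reduction comes from.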
arXiv Detail & Related papers (2024-02-26T05:51:47Z)
- Stochastic stiffness identification and response estimation of Timoshenko beams via physics-informed Gaussian processes [0.0]

This paper presents a physics-informed Gaussian process (GP) model for Timoshenko beam elements.
The proposed approach is effective at identifying structural parameters and is capable of fusing data from heterogeneous and multi-fidelity sensors.
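The GP machinery underlying this kind of identification can be sketched with plain GP regression; the kernel and data below are generic placeholders, not the paper's physics-informed Timoshenko-beam kernels, which encode the beam equations.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(x1, x2, ell=0.15, sf=1.0):
    """Squared-exponential (RBF) kernel."""
    return sf**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell**2)

# Noisy observations of a latent function (generic stand-in for sensor data).
x = np.linspace(0.0, 1.0, 25)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)
x_star = np.linspace(0.0, 1.0, 100)

K = rbf(x, x) + 0.1**2 * np.eye(x.size)        # kernel + observation noise
K_s = rbf(x_star, x)
mean = K_s @ np.linalg.solve(K, y)              # posterior mean at test points
cov = rbf(x_star, x_star) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # posterior std (uncertainty)
```

The posterior standard deviation is what enables fusing heterogeneous sensors: each data source enters the same joint GP with its own noise level.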
arXiv Detail & Related papers (2023-09-21T08:22:12Z)
- Amortized Bayesian Inference of GISAXS Data with Normalizing Flows [0.10752246796855561]
We propose a simulation-based framework that combines variational auto-encoders and normalizing flows to estimate the posterior distribution of object parameters.
We demonstrate that our method reduces the inference cost by orders of magnitude while producing consistent results with ABC.
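The ABC baseline that such amortized methods are compared against can be sketched with rejection sampling; the toy model and tolerance below are illustrative, not the GISAXS setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative toy problem: infer theta from the mean of 50 noisy draws.
theta_true = 0.7
s_obs = (theta_true + rng.normal(0.0, 0.1, 50)).mean()   # observed summary statistic

n_sims, tol = 20000, 0.01
theta_prior = rng.uniform(0.0, 1.0, n_sims)              # prior samples
# Simulated summary: mean of 50 noisy draws ~ N(theta, 0.1^2 / 50)
s_sim = theta_prior + rng.normal(0.0, 0.1 / np.sqrt(50), n_sims)
accepted = theta_prior[np.abs(s_sim - s_obs) < tol]      # ABC posterior samples
```

Each posterior sample here costs a fresh batch of simulations, and most are rejected; amortized flow-based inference pays the simulation cost once up front and then evaluates the posterior for new observations cheaply, which is the orders-of-magnitude saving claimed above.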
arXiv Detail & Related papers (2022-10-04T12:09:57Z)
- Quantile-constrained Wasserstein projections for robust interpretability of numerical and machine learning models [18.771531343438227]
The study of black-box models is often based on sensitivity analysis involving a probabilistic structure imposed on the inputs.
Our work aims to unify the UQ and ML interpretability approaches by providing relevant and easy-to-use tools for both paradigms.
arXiv Detail & Related papers (2022-09-23T11:58:03Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive evidence lower bounds for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Pseudo-Spherical Contrastive Divergence [119.28384561517292]
We propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of energy-based models.
PS-CD avoids the intractable partition function and provides a generalized family of learning objectives.
arXiv Detail & Related papers (2021-11-01T09:17:15Z)
- Identifiable Energy-based Representations: An Application to Estimating Heterogeneous Causal Effects [83.66276516095665]
Conditional average treatment effects (CATEs) allow us to understand the effect heterogeneity across a large population of individuals.
Typical CATE learners assume all confounding variables are measured in order for the CATE to be identifiable.
We propose an energy-based model (EBM) that learns a low-dimensional representation of the variables by employing a noise contrastive loss function.
arXiv Detail & Related papers (2021-08-06T10:39:49Z)
- Scalable Statistical Inference of Photometric Redshift via Data Subsampling [0.3222802562733786]
Handling big data has largely been a major bottleneck in traditional statistical models.
We develop a data-driven statistical modeling framework that combines the uncertainties from an ensemble of statistical models.
We demonstrate this method on a photometric redshift estimation problem in cosmology.
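The subsample-ensemble idea above can be sketched with a toy regression: fit a simple model on each random subsample and use disagreement across fits as an uncertainty estimate. The model and data here are illustrative placeholders, not the paper's redshift pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression stand-in for a big-data estimation problem.
x = rng.uniform(0.0, 1.0, 5000)
y = 1.5 * x + 0.2 * x**2 + rng.normal(0.0, 0.05, x.size)

n_sub, sub_size = 20, 500
x_grid = np.linspace(0.0, 1.0, 50)
preds = []
for _ in range(n_sub):
    idx = rng.choice(x.size, sub_size, replace=False)  # random subsample
    coef = np.polyfit(x[idx], y[idx], 2)               # cheap model per subsample
    preds.append(np.polyval(coef, x_grid))
preds = np.array(preds)

ens_mean = preds.mean(axis=0)  # combined prediction
ens_std = preds.std(axis=0)    # between-model uncertainty
```

Each fit sees only a fraction of the data, so the ensemble sidesteps the big-data bottleneck while the spread across members quantifies estimation uncertainty.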
arXiv Detail & Related papers (2021-03-30T02:49:50Z)
- Statistical aspects of nuclear mass models [0.0]
We study the information content of nuclear masses from the perspective of global models of nuclear binding energies.
We employ a number of statistical methods and diagnostic tools, including Bayesian calibration, Bayesian model averaging, chi-square correlation analysis, principal component analysis, and empirical coverage probability.
arXiv Detail & Related papers (2020-02-11T00:47:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.