Impact Study of Numerical Discretization Accuracy on Parameter
Reconstructions and Model Parameter Distributions
- URL: http://arxiv.org/abs/2305.02663v2
- Date: Wed, 21 Jun 2023 10:24:58 GMT
- Title: Impact Study of Numerical Discretization Accuracy on Parameter
Reconstructions and Model Parameter Distributions
- Authors: Matthias Plock, Martin Hammerschmidt, Sven Burger, Philipp-Immanuel
Schneider, Christof Schütte
- Abstract summary: We use a finite element numerical model to reconstruct the parameters of a nanostructured line grating.
We investigated the impact of the finite element ansatz functions on the reconstructed parameters as well as on the model parameter distributions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In optical nanometrology, numerical models are widely used for parameter
reconstructions. Using the Bayesian target vector optimization method, we fit a
finite element numerical model to a grazing-incidence X-ray fluorescence data
set in order to obtain the geometrical parameters of a nanostructured line
grating. Gaussian process surrogate models, a form of stochastic machine
learning model, were trained during the reconstruction and afterwards sampled
with a Markov chain Monte Carlo sampler to determine the distribution of the
reconstructed model parameters. The numerical discretization parameters of the
finite element model impact its numerical discretization error. We investigated
the impact of the polynomial order of the finite element ansatz functions on
the reconstructed parameters as well as on the model parameter distributions.
We showed that such a convergence study allows one to determine numerical
parameters that enable efficient and accurate reconstruction results.
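The reconstruction pipeline the abstract describes — train a Gaussian process surrogate on forward-model evaluations, then sample the surrogate with a Markov chain Monte Carlo sampler to obtain the parameter distribution — can be sketched in miniature. The following is a hypothetical one-parameter toy: the forward model, kernel length scale, prior bounds, and noise level are all invented for illustration, and a plain Metropolis sampler stands in for the authors' actual GIXRF/FEM setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(p):
    # Stand-in for an expensive FEM solve (monotone, smooth toy model)
    return np.tanh(p) + 0.5 * p

# Synthetic noisy measurement at an assumed "true" parameter value
p_true, sigma = 0.8, 0.05
y_obs = forward(p_true) + rng.normal(0.0, sigma)

# Training data for the surrogate (would come from forward-model runs)
P = np.linspace(0.0, 2.0, 15)
Y = forward(P)

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP interpolation weights (small jitter for numerical stability)
K = rbf(P, P) + 1e-6 * np.eye(len(P))
alpha = np.linalg.solve(K, Y)

def surrogate(p):
    # GP posterior mean at a scalar parameter value p
    return (rbf(np.atleast_1d(p), P) @ alpha)[0]

def log_post(p):
    # Uniform prior on [0, 2] times a Gaussian likelihood on the surrogate
    if not (0.0 <= p <= 2.0):
        return -np.inf
    return -0.5 * ((y_obs - surrogate(p)) / sigma) ** 2

# Metropolis sampling over the cheap surrogate instead of the FEM model
samples, p, lp = [], 1.0, log_post(1.0)
for _ in range(5000):
    q = p + rng.normal(0.0, 0.1)
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:
        p, lp = q, lq
    samples.append(p)

chain = np.array(samples[1000:])      # discard burn-in
post_mean, post_std = chain.mean(), chain.std()
print(post_mean, post_std)
```

Because the surrogate is cheap to evaluate, the MCMC step costs essentially nothing compared with the forward-model runs used to train it — which is the point of the surrogate-based approach.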
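The paper's closing point — that a convergence study over the discretization parameter shows when reconstructions stabilize — can be illustrated with a deliberately simple stand-in. Here a 1-D finite difference model (not the authors' finite element solver, and grid refinement rather than polynomial order) is refined while a single material parameter k is re-estimated from a fixed synthetic observation; the reconstructed value converges at the scheme's order of accuracy. The toy ODE and all names are invented for illustration.

```python
import numpy as np

def solve_fd(n):
    """Solve -v''(x) = sin(pi x), v(0) = v(1) = 0 with n interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    off = -np.ones(n - 1)
    A = (np.diag(2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    return x, np.linalg.solve(A, np.sin(np.pi * x))

# Synthetic "measurement": exact midpoint value of u for -k u'' = sin(pi x),
# whose solution is u(x) = sin(pi x) / (k pi^2)
k_true = 2.0
y_obs = 1.0 / (k_true * np.pi**2)

# Since u = v / k, each refinement level yields a one-line reconstruction
k_hats = []
for n in (9, 19, 39, 79):             # odd n so that x = 1/2 is a grid point
    x, v = solve_fd(n)
    v_mid = v[np.argmin(np.abs(x - 0.5))]
    k_hats.append(v_mid / y_obs)
    print(n, k_hats[-1])
```

The printed estimates approach k_true = 2.0, with the error shrinking roughly fourfold per grid doubling (second-order convergence) — a coarse discretization biases the reconstructed parameter, exactly the effect the convergence study is meant to expose.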
Related papers
- Efficient Covariance Estimation for Sparsified Functional Data [51.69796254617083]
The proposed Random-knots (Random-knots-Spatial) and B-spline (Bspline-Spatial) estimators of the covariance function are computationally efficient. Asymptotic pointwise results for the covariance estimators are obtained for sparsified individual trajectories under some regularity conditions.
arXiv Detail & Related papers (2025-11-23T00:50:33Z) - Uncertainty quantification in model discovery by distilling interpretable material constitutive models from Gaussian process posteriors [37.43688754886933]
Constitutive model discovery refers to the task of identifying an appropriate model structure. We propose a four-step partially Bayesian framework for uncertainty quantification in model discovery. We demonstrate the capability of our framework for both isotropic and anisotropic experimental data as well as linear and non-linear model libraries.
arXiv Detail & Related papers (2025-10-25T16:02:03Z) - Physics-Informed Regression: Parameter Estimation in Parameter-Linear Nonlinear Dynamic Models [0.0]
We introduce the term "Physics-Informed Regression" (PIR) to describe the proposed data-driven hybrid technique. PIR is tested and compared against the related technique, physics-informed neural networks (PINN), on synthetic data. It is concluded that PIR is superior to PINN for the models considered.
arXiv Detail & Related papers (2025-07-25T00:31:16Z) - Multi-fidelity Parameter Estimation Using Conditional Diffusion Models [6.934199382834925]
We present a multi-fidelity method for uncertainty quantification of parameter estimates in complex systems.
We use conditional generative models trained to sample the target conditional distribution.
We demonstrate the effectiveness of the proposed method on several numerical examples.
arXiv Detail & Related papers (2025-04-02T16:54:47Z) - Mitigating Parameter Degeneracy using Joint Conditional Diffusion Model for WECC Composite Load Model in Power Systems [2.7212274374272543]
We develop a joint conditional diffusion model-based inverse problem solver (JCDI)
JCDI incorporates a joint conditioning architecture with simultaneous inputs of multi-event observations to improve parameter generalizability.
Simulation studies on the WECC CLM show that the proposed JCDI effectively reduces uncertainties of degenerate parameters.
arXiv Detail & Related papers (2024-11-15T18:53:08Z) - Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of the optimizers, parameterizations, and model sizes considered.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate the approach on robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
arXiv Detail & Related papers (2024-04-23T09:22:35Z) - Active-Learning-Driven Surrogate Modeling for Efficient Simulation of
Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
arXiv Detail & Related papers (2023-06-09T18:01:14Z) - On the Influence of Enforcing Model Identifiability on Learning dynamics
of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z) - Counting Phases and Faces Using Bayesian Thermodynamic Integration [77.34726150561087]
We introduce a new approach to reconstruction of the thermodynamic functions and phase boundaries in two-parametric statistical mechanics systems.
We use the proposed approach to accurately reconstruct the partition functions and phase diagrams of the Ising model and the exactly solvable non-equilibrium TASEP.
arXiv Detail & Related papers (2022-05-18T17:11:23Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Physics-constrained deep neural network method for estimating parameters
in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has an improved generalization ability for estimating parameter values for operating conditions not used in the training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood
Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Bayesian multiscale deep generative model for the solution of
high-dimensional inverse problems [0.0]
A novel multiscale Bayesian inference approach is introduced based on deep probabilistic generative models.
The method allows high-dimensional parameter estimation while exhibiting stability, efficiency and accuracy.
arXiv Detail & Related papers (2021-02-04T11:47:21Z) - On the Sparsity of Neural Machine Translation Models [65.49762428553345]
We investigate whether redundant parameters can be reused to achieve better performance.
Experiments and analyses are systematically conducted on different datasets and NMT architectures.
arXiv Detail & Related papers (2020-10-06T11:47:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.