Approximate Bayesian Computation for Physical Inverse Modeling
- URL: http://arxiv.org/abs/2111.13296v1
- Date: Fri, 26 Nov 2021 02:23:05 GMT
- Title: Approximate Bayesian Computation for Physical Inverse Modeling
- Authors: Neel Chatterjee, Somya Sharma, Sarah Swisher, Snigdhansu Chatterjee
- Abstract summary: We propose a new method for automating the model parameter extraction process, resulting in an accurate model fit.
It is shown that the extracted parameters can be accurately predicted from the mobility curves using gradient boosted trees.
This work also provides a comparative analysis of the proposed framework with fine-tuned neural networks wherein the proposed framework is shown to perform better.
- Score: 0.32771631221674324
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Semiconductor device models are essential for understanding charge
transport in thin film transistors (TFTs). Drawing inferences with these TFT
models involves estimating the parameters that fit the model to experimental
data, such as extracted charge carrier mobility or measured current.
Estimating these parameters helps us draw inferences about device performance.
Fitting a TFT model to given experimental data relies on manual fine-tuning of
multiple parameters by human experts. Several of these parameters may have
confounding effects on the experimental data, making the extraction of their
individual effects a non-intuitive process during manual tuning. To avoid this
convoluted process, we propose a new method for automating the model parameter
extraction process, resulting in an accurate model fit. In this work,
model-choice-based approximate Bayesian computation (ABC) is used to generate
the posterior distribution of the estimated parameters from the mobility
observed at various gate voltage values. Furthermore, it is shown that the
extracted parameters can be accurately predicted from the mobility curves
using gradient boosted trees. This work also provides a comparative analysis
of the proposed framework with fine-tuned neural networks, wherein the
proposed framework is shown to perform better.
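As a rough illustration of the pipeline sketched in the abstract, the snippet below runs rejection ABC against a hypothetical power-law mobility model and then fits gradient boosted trees to predict parameters directly from simulated mobility curves. This is a minimal sketch under assumed choices: the mobility model, priors, gate-voltage sweep, distance, and acceptance rule are all illustrative, not the paper's actual TFT model or its model-choice ABC procedure.

```python
# Minimal sketch, assuming a hypothetical power-law mobility model, uniform
# priors, a Euclidean distance on curves, and a nearest-fraction acceptance
# rule; the paper's actual TFT model and model-choice ABC setup are not shown.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
v_gate = np.linspace(5.0, 30.0, 26)            # illustrative gate-voltage sweep (V)

def mobility_model(v_g, mu0, v_t, gamma):
    """Hypothetical stand-in for the TFT model: mu(V_G) = mu0 * (V_G - V_T)^gamma."""
    return mu0 * np.clip(v_g - v_t, 1e-6, None) ** gamma

# "Observed" mobility curve simulated from known parameters plus noise.
true_theta = np.array([0.05, 2.0, 0.7])        # mu0, V_T, gamma
observed = mobility_model(v_gate, *true_theta) + rng.normal(0.0, 0.02, v_gate.size)

# 1) Rejection ABC: sample parameters from the priors, simulate mobility curves,
#    and keep the parameter draws whose curves lie closest to the observation.
n_sim = 20000
theta = np.column_stack([
    rng.uniform(0.01, 0.10, n_sim),            # prior on mu0
    rng.uniform(0.0, 5.0, n_sim),              # prior on V_T
    rng.uniform(0.3, 1.2, n_sim),              # prior on gamma
])
curves = np.array([mobility_model(v_gate, *t) for t in theta])
dist = np.linalg.norm(curves - observed, axis=1)
eps = np.quantile(dist, 0.01)                  # accept the closest 1% of draws
posterior = theta[dist <= eps]                 # approximate posterior samples
print("posterior mean:", posterior.mean(axis=0))

# 2) Gradient boosted trees: learn the inverse map (mobility curve -> parameters)
#    from the same simulations and predict parameters for the observed curve.
gbt = MultiOutputRegressor(GradientBoostingRegressor()).fit(curves, theta)
print("GBT estimate:", gbt.predict(observed.reshape(1, -1))[0])
```

In the paper the posterior comes from model-choice ABC and the comparison baseline is a fine-tuned neural network; the sketch above only shows the simulate-compare-accept step and the curve-to-parameter regression flavor of the idea.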
Related papers
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- A transport approach to sequential simulation-based inference [0.0]
We present a new transport-based approach to efficiently perform sequential Bayesian inference of static model parameters.
The strategy is based on the extraction of conditional distributions from the joint distribution of parameters and data, via the estimation of structured (e.g., block triangular) transport maps.
This allows gradient-based characterization of the posterior density via transport maps in a model-free, online phase.
arXiv Detail & Related papers (2023-08-26T18:53:48Z)
- Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning [126.84770886628833]
Existing finetuning methods either tune all parameters of the pretrained model (full finetuning) or only the last linear layer (linear probing).
We propose a new parameter-efficient finetuning method termed SSF, in which one only needs to Scale and Shift the deep Features extracted by a pre-trained model to match the performance of full finetuning.
arXiv Detail & Related papers (2022-10-17T08:14:49Z)
- An efficient estimation of time-varying parameters of dynamic models by combining offline batch optimization and online data assimilation [0.0]
I present an efficient and practical method to estimate the time-varying parameters of relatively low dimensional models.
I propose combining offline batch optimization and online data assimilation.
arXiv Detail & Related papers (2021-10-24T20:12:12Z)
- SWAT Watershed Model Calibration using Deep Learning [0.860255319568951]
We present a fast, accurate, and reliable methodology to calibrate the SWAT model using deep learning (DL).
We develop DL-enabled inverse models based on convolutional neural networks to ingest streamflow data and estimate the SWAT model parameters.
Our results show that the DL-based calibration is better than traditional parameter estimation methods.
arXiv Detail & Related papers (2021-10-06T22:56:23Z)
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to their large parameter capacity, but this capacity also leads to huge computation cost.
We explore accelerating large-model inference by conditional computation based on the sparse-activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
- Combining data assimilation and machine learning to estimate parameters of a convective-scale model [0.0]
Errors in the representation of clouds in convection-permitting numerical weather prediction models can be introduced by different sources.
In this work, we look at the problem of parameter estimation through an artificial intelligence lens by training two types of artificial neural networks.
arXiv Detail & Related papers (2021-09-07T09:17:29Z)
- Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has an improved generalization ability for estimating parameter values for operating conditions not used in the training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data; a minimal sketch of this ratio-estimation idea appears after this list.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
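The likelihood-to-evidence ratio approach summarized in the MINIMALIST entry above can be illustrated with a small classifier-based sketch: joint (parameter, simulation) pairs are separated from shuffled pairs, and the classifier's log-odds approximate the likelihood-to-evidence ratio. The toy Gaussian simulator, the sklearn MLP classifier, and the uniform 1-D prior are illustrative assumptions, not the paper's mutual-information objective or trajectory setting.

```python
# Minimal sketch of amortized likelihood-to-evidence ratio estimation with a
# toy simulator; assumptions (not from the paper): Gaussian simulator, sklearn
# MLP classifier, uniform prior over a 1-D parameter.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def simulate(theta):
    """Toy simulator: observation = parameter + Gaussian noise."""
    return theta + rng.normal(0.0, 0.5, size=theta.shape)

# Joint pairs (theta_i, x_i) vs. shuffled "marginal" pairs (theta_i, x_j).
n = 5000
theta = rng.uniform(-3.0, 3.0, size=(n, 1))
x = simulate(theta)
pairs = np.vstack([np.hstack([theta, x]),
                   np.hstack([theta, x[rng.permutation(n)]])])
labels = np.concatenate([np.ones(n), np.zeros(n)])

# A classifier separating joint from marginal pairs; its log-odds estimate
# log p(x | theta) - log p(x), i.e. the likelihood-to-evidence ratio.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(pairs, labels)

def log_ratio(theta_val, x_obs):
    p = clf.predict_proba(np.array([[theta_val, x_obs]]))[0, 1]
    return np.log(p) - np.log(1.0 - p)

# With a uniform prior, the ratio evaluated on a parameter grid gives an
# unnormalized log-posterior for a single observation x_obs = 1.0.
grid = np.linspace(-3.0, 3.0, 61)
log_post = [log_ratio(t, 1.0) for t in grid]
print("posterior mode near:", grid[int(np.argmax(log_post))])
```

Because the ratio estimator is a function of both the parameter and the observation, it can be reused for any new observation without re-running the simulator, which is what makes this class of estimators amortized.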
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.