Ga$_2$O$_3$ TCAD Mobility Parameter Calibration using Simulation Augmented Machine Learning with Physics Informed Neural Network
- URL: http://arxiv.org/abs/2504.02283v1
- Date: Thu, 03 Apr 2025 05:09:43 GMT
- Title: Ga$_2$O$_3$ TCAD Mobility Parameter Calibration using Simulation Augmented Machine Learning with Physics Informed Neural Network
- Authors: Le Minh Long Nguyen, Edric Ong, Matthew Eng, Yuhao Zhang, Hiu Yung Wong
- Abstract summary: We show the possibility of performing automatic Technology Computer-Aided-Design (TCAD) parameter calibration using machine learning, verified with experimental data. A machine composed of an autoencoder (AE) and a neural network (NN) (AE-NN) is used. TCAD simulation with the extracted parameters shows that their quality matches an expert's calibration in the pre-turn-on regime but not in the on-state regime.
- Score: 3.194221922047046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we demonstrate the possibility of performing automatic Technology Computer-Aided-Design (TCAD) parameter calibration using machine learning, verified with experimental data. The machine only needs to be trained on TCAD data. A Schottky Barrier Diode (SBD) fabricated with the emerging ultra-wide-bandgap material gallium oxide (Ga$_2$O$_3$) is measured, and its current-voltage (IV) characteristic is used to extract the Ga$_2$O$_3$ Philips Unified Mobility (PhuMob) model parameters, the effective anode workfunction, and the ambient temperature (7 parameters in total). A machine composed of an autoencoder (AE) and a neural network (NN) (AE-NN) is used. Ga$_2$O$_3$ PhuMob parameters are extracted from the noisy experimental curves. TCAD simulation with the extracted parameters shows that their quality matches an expert's calibration in the pre-turn-on regime but not in the on-state regime. By using a simple physics-informed neural network (PINN) (AE-PINN), the machine performs as well as the human expert in all regimes.
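The AE-NN idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the curve length, latent size, and layer widths are assumptions, and the placeholder tensors stand in for TCAD-simulated IV curves and the 7 normalized parameters (PhuMob parameters, anode workfunction, temperature) used to generate them.

```python
# Minimal AE-NN sketch (assumed shapes/widths): an autoencoder compresses an
# IV curve into a latent code, and a small regressor maps that code to the
# 7 calibration parameters. Trained only on TCAD-simulated data.
import torch
import torch.nn as nn

N_POINTS = 101   # assumed number of voltage samples per IV curve
N_LATENT = 8     # assumed latent dimension
N_PARAMS = 7     # PhuMob parameters + anode workfunction + temperature

class AENN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_POINTS, 64), nn.ReLU(),
            nn.Linear(64, N_LATENT))
        self.decoder = nn.Sequential(
            nn.Linear(N_LATENT, 64), nn.ReLU(),
            nn.Linear(64, N_POINTS))
        self.regressor = nn.Sequential(          # NN head: latent -> parameters
            nn.Linear(N_LATENT, 32), nn.ReLU(),
            nn.Linear(32, N_PARAMS))

    def forward(self, iv):
        z = self.encoder(iv)
        return self.decoder(z), self.regressor(z)

# Placeholder training data; in practice these are TCAD curves and the
# (normalized) parameter sets used to generate them.
iv_curves = torch.randn(256, N_POINTS)
params    = torch.rand(256, N_PARAMS)

model = AENN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
for epoch in range(200):
    recon, pred = model(iv_curves)
    loss = mse(recon, iv_curves) + mse(pred, params)  # reconstruction + regression
    opt.zero_grad(); loss.backward(); opt.step()

# At inference, a measured (noisy) IV curve is encoded and the regressor
# returns a parameter estimate that seeds the TCAD calibration.
```

The AE-PINN variant additionally constrains the network with device physics to improve the on-state fit; how that constraint is imposed follows the paper rather than this sketch.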
Related papers
- Deep Learning to Automate Parameter Extraction and Model Fitting of Two-Dimensional Transistors [0.0]
We present a deep learning approach to extract physical parameters of 2D transistors from electrical measurements. We train a secondary neural network to approximate a physics-based device simulator. This method enables high-quality fits after training the neural network on electrical data generated from simulations of 500 devices.
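As a hedged sketch of the surrogate-fitting idea in this entry (not the authors' code): a neural network is first trained to mimic a physics-based simulator, and a measured curve is then fit by gradient descent through the frozen surrogate. The parameter count, bias grid, and data below are placeholders.

```python
# Sketch: fit device parameters by gradient descent through a frozen NN
# surrogate of a physics-based simulator (shapes and data are placeholders).
import torch
import torch.nn as nn

N_PARAMS, N_BIAS = 4, 64   # assumed: 4 device parameters, 64 bias points

surrogate = nn.Sequential(  # maps parameters -> simulated I-V response
    nn.Linear(N_PARAMS, 64), nn.ReLU(),
    nn.Linear(64, N_BIAS))

# Step 1: train the surrogate on simulator input/output pairs (placeholder data).
sim_params = torch.rand(500, N_PARAMS)          # e.g. 500 simulated devices
sim_curves = torch.randn(500, N_BIAS)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(300):
    loss = nn.functional.mse_loss(surrogate(sim_params), sim_curves)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: freeze the surrogate and optimize parameters to match a measurement.
for p in surrogate.parameters():
    p.requires_grad_(False)
measured = torch.randn(1, N_BIAS)               # placeholder measured curve
theta = torch.rand(1, N_PARAMS, requires_grad=True)
fit_opt = torch.optim.Adam([theta], lr=1e-2)
for _ in range(500):
    loss = nn.functional.mse_loss(surrogate(theta), measured)
    fit_opt.zero_grad(); loss.backward(); fit_opt.step()
print(theta.detach())   # extracted parameter estimate
```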
arXiv Detail & Related papers (2025-07-07T15:46:25Z)
- Optimizing Hyperparameters for Quantum Data Re-Uploaders in Calorimetric Particle Identification [11.099632666738177]
We present an application of a single-qubit Data Re-Uploading (QRU) quantum model for particle classification in calorimetric experiments.
This model requires minimal qubits while delivering strong classification performance.
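A minimal NumPy sketch of a single-qubit data re-uploading classifier of the kind this entry describes (not the paper's model): each layer re-encodes the input through parameterized rotations, and the probability of measuring |0> serves as the class score. The layer count and three-feature encoding are assumptions.

```python
# Sketch of a single-qubit data re-uploading (QRU) classifier in NumPy:
# the input x is re-encoded in every layer via parameterized rotations,
# and P(|0>) after the circuit is used as the class score.
import numpy as np

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]], dtype=complex)

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]], dtype=complex)

def qru_score(x, weights, biases):
    """x: feature vector of length 3; weights/biases: (n_layers, 3) arrays."""
    state = np.array([1.0, 0.0], dtype=complex)          # start in |0>
    for w, b in zip(weights, biases):
        a = w * x + b                                      # data re-uploading
        layer = rz(a[2]) @ ry(a[1]) @ rz(a[0])             # general rotation
        state = layer @ state
    return np.abs(state[0]) ** 2                           # P(measuring |0>)

rng = np.random.default_rng(0)
n_layers = 4                                               # assumed circuit depth
weights = rng.normal(size=(n_layers, 3))
biases = rng.normal(size=(n_layers, 3))
x = rng.normal(size=3)                                     # e.g. calorimeter features
print(qru_score(x, weights, biases))
```

In practice the weights and biases would be tuned with a classical optimizer against labeled data; that training loop is omitted here.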
arXiv Detail & Related papers (2024-12-16T23:10:00Z)
- Machine learning Hubbard parameters with equivariant neural networks [0.0]
We present a machine learning model based on equivariant neural networks. We target here the prediction of Hubbard parameters computed self-consistently with iterative linear-response calculations. Our model achieves mean absolute relative errors of 3% and 5% for Hubbard $U$ and $V$ parameters, respectively.
arXiv Detail & Related papers (2024-06-04T16:21:24Z)
- Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
- A Three-regime Model of Network Pruning [47.92525418773768]
We use temperature-like and load-like parameters to model the impact of neural network (NN) training hyperparameters on pruning performance.
A key empirical result we identify is a sharp transition phenomenon: depending on the value of a load-like parameter in the pruned model, increasing the value of a temperature-like parameter in the pre-pruned model may either enhance or impair subsequent pruning performance.
Our model reveals that the dichotomous effect of high temperature is associated with transitions between distinct types of global structures in the post-pruned model.
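As a hedged illustration of the two knobs this entry refers to (not the paper's analysis): the load-like parameter can be read as the fraction of weights kept after magnitude pruning, while a temperature-like parameter (e.g., learning rate or batch size) is set during pre-pruning training. The sketch below shows only the pruning step.

```python
# Sketch: global magnitude pruning at a given density (the "load-like" knob).
# The "temperature-like" knob (e.g. learning rate / batch size) would be set
# when training the dense model beforehand; that training loop is omitted.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, density: float) -> None:
    """Zero out all but the largest-|w| fraction `density` of weights, globally."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    k = max(1, int(density * weights.numel()))
    threshold = torch.topk(weights, k, largest=True).values.min()
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_((p.abs() >= threshold).float())

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
magnitude_prune(net, density=0.2)   # keep 20% of the weights
```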
arXiv Detail & Related papers (2023-05-28T08:09:25Z)
- Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning [60.69217130006758]
We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models are optimized to parameterize via deep geometric learning to embed human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
arXiv Detail & Related papers (2023-05-03T13:45:40Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- A CNN-Transformer Deep Learning Model for Real-time Sleep Stage Classification in an Energy-Constrained Wireless Device [2.5672176409865686]
This paper proposes a deep learning (DL) model for automatic sleep stage classification based on single-channel EEG data.
The model was designed to run on energy and memory-constrained devices for real-time operation with local processing.
We tested a reduced-sized version of the proposed model on a low-cost Arduino Nano 33 BLE board and it was fully functional and accurate.
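A minimal PyTorch sketch of a CNN-plus-Transformer classifier for single-channel EEG epochs follows; the layer sizes, epoch length, and five-stage output are assumptions, not the published (reduced, device-ready) architecture.

```python
# Sketch: 1D CNN front-end extracts local EEG features, a Transformer encoder
# models longer-range context, and a linear head predicts the sleep stage.
# All layer sizes are assumptions, not the published architecture.
import torch
import torch.nn as nn

N_SAMPLES = 3000   # assumed: 30 s epoch at 100 Hz, single channel
N_STAGES = 5       # W, N1, N2, N3, REM

class SleepNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool1d(4))
        enc_layer = nn.TransformerEncoderLayer(
            d_model=32, nhead=4, dim_feedforward=64, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(32, N_STAGES)

    def forward(self, x):              # x: (batch, 1, N_SAMPLES)
        feats = self.cnn(x)            # (batch, 32, seq_len)
        feats = feats.transpose(1, 2)  # (batch, seq_len, 32) for the encoder
        ctx = self.transformer(feats).mean(dim=1)
        return self.head(ctx)          # per-stage logits

logits = SleepNet()(torch.randn(8, 1, N_SAMPLES))
print(logits.shape)   # torch.Size([8, 5])
```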
arXiv Detail & Related papers (2022-11-20T16:22:30Z)
- Physics-informed Variational Autoencoders for Improved Robustness to Environmental Factors of Variation [0.6384650391969042]
p$^3$VAE is a variational autoencoder that integrates prior physical knowledge about the latent factors of variation related to the data acquisition conditions. We introduce a semi-supervised learning algorithm that strikes a balance between the machine learning part and the physics part.
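A hedged, generic sketch of the physics-informed VAE idea (not p$^3$VAE itself): part of the latent code drives a fixed, differentiable physics model of the acquisition conditions, the rest drives a learned decoder, and the two are combined before the reconstruction loss. The multiplicative toy physics model and all dimensions are assumptions.

```python
# Sketch: a VAE whose decoder combines a fixed differentiable "physics" term
# (driven by part of the latent code) with a learned residual decoder.
import torch
import torch.nn as nn

D_IN, D_PHYS, D_AUX = 32, 2, 6      # assumed data / latent dimensions

class PhysVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(D_IN, 2 * (D_PHYS + D_AUX))   # outputs mu and logvar
        self.dec = nn.Sequential(nn.Linear(D_AUX, 64), nn.ReLU(),
                                 nn.Linear(64, D_IN))

    @staticmethod
    def physics(z_phys):
        # Toy acquisition model: a smooth multiplicative gain per sample,
        # standing in for e.g. illumination or atmospheric effects.
        return torch.sigmoid(z_phys.sum(dim=1, keepdim=True))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        z_phys, z_aux = z[:, :D_PHYS], z[:, D_PHYS:]
        x_hat = self.physics(z_phys) * self.dec(z_aux)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return x_hat, kl

model = PhysVAE()
x = torch.randn(16, D_IN)
x_hat, kl = model(x)
loss = nn.functional.mse_loss(x_hat, x) + 1e-3 * kl   # ELBO-style objective
loss.backward()
```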
arXiv Detail & Related papers (2022-10-19T09:32:15Z)
- Tuning arrays with rays: Physics-informed tuning of quantum dot charge states [0.0]
Quantum computers based on gate-defined quantum dots (QDs) are expected to scale.
As the number of qubits increases, the burden of manually calibrating these systems becomes unreasonable.
Here, we demonstrate an intuitive, reliable, and data-efficient set of tools for automated global state and charge tuning.
arXiv Detail & Related papers (2022-09-08T14:17:49Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing the pressures by controlling injection/extraction is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
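A hedged toy version of the differentiable-programming idea in this entry (nothing like a full-physics reservoir model): a simple differentiable pressure response is written in PyTorch, and extraction rates are optimized by gradient descent so no monitoring point exceeds a pressure cap. All coefficients are made up.

```python
# Toy sketch: optimize extraction rates through a differentiable pressure model
# so that no monitoring point exceeds a pressure cap. Coefficients are made up.
import torch

torch.manual_seed(0)
n_wells, n_points = 3, 5
influence = torch.rand(n_points, n_wells)        # pressure drop per unit extraction
injection_pressure = 3.0 * torch.ones(n_points)  # fixed contribution from injection
p_max = 2.5                                      # pressure cap

rates = torch.zeros(n_wells, requires_grad=True)   # extraction rates to optimize
opt = torch.optim.Adam([rates], lr=0.05)
for _ in range(500):
    pressure = injection_pressure - influence @ torch.relu(rates)
    overshoot = torch.relu(pressure - p_max).sum()     # penalize cap violations
    cost = overshoot + 0.01 * torch.relu(rates).sum()  # and excessive pumping
    opt.zero_grad(); cost.backward(); opt.step()

print(torch.relu(rates).detach())   # extraction schedule keeping pressures below cap
```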
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
- On the Sparsity of Neural Machine Translation Models [65.49762428553345]
We investigate whether redundant parameters can be reused to achieve better performance.
Experiments and analyses are systematically conducted on different datasets and NMT architectures.
arXiv Detail & Related papers (2020-10-06T11:47:20Z)
- Predicting atmospheric optical properties for radiative transfer computations using neural networks [0.0]
We develop a machine learning-based parametrization for the gaseous optical properties by training neural networks to emulate a modern radiation parameterization (RRTMGP).
Our neural network-based gas optics parametrization is up to 4 times faster than RRTMGP, depending on the size of the neural networks.
We conclude that our machine learning-based parametrization can speed up radiative transfer computations whilst retaining high accuracy.
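A hedged sketch of the emulation strategy (not the RRTMGP-NN code): a small MLP is trained to map atmospheric state variables to per-band optical properties, with random placeholder data standing in for the reference parameterization's input/output pairs. The input count and band count are assumptions.

```python
# Sketch: train a small MLP to emulate a gas-optics parameterization, mapping
# (pressure, temperature, gas concentrations) to per-band optical depths.
import torch
import torch.nn as nn

N_IN, N_BANDS = 5, 16                      # assumed inputs and spectral bands
emulator = nn.Sequential(
    nn.Linear(N_IN, 64), nn.Softplus(),
    nn.Linear(64, 64), nn.Softplus(),
    nn.Linear(64, N_BANDS))

state = torch.rand(4096, N_IN)             # normalized atmospheric state samples
tau_ref = torch.rand(4096, N_BANDS)        # reference optical depths (placeholder)

opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(emulator(state), tau_ref)
    opt.zero_grad(); loss.backward(); opt.step()

# Once trained, the MLP replaces the lookup-table parameterization inside the
# radiative transfer solver, trading a small accuracy loss for speed.
```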
arXiv Detail & Related papers (2020-05-05T15:00:58Z)