Semi-supervised physics guided deep learning framework for predicting
the I-V characteristics of GaN HEMT
- URL: http://arxiv.org/abs/2110.10724v2
- Date: Fri, 22 Oct 2021 06:06:22 GMT
- Title: Semi-supervised physics guided deep learning framework for predicting
the I-V characteristics of GaN HEMT
- Authors: Shivanshu Mishra, Bipin Gaikwad and Nidhi Chaturvedi
- Abstract summary: The framework is generic and can be applied to model
phenomena from other fields of research as long as their behaviour is known.
A semi-supervised physics guided neural network (SPGNN) has been developed
that predicts the I-V characteristics of a gallium nitride-based high
electron mobility transistor (GaN HEMT).
The SPGNN reduces the training-data requirement by more than 80% while
achieving performance similar to or better than a traditional neural network
(TNN), even under unseen conditions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This letter proposes a novel deep learning framework (DLF) that
addresses two major hurdles in the adoption of deep learning techniques for
solving physics-based problems: 1) the requirement of a large dataset for
training the DL model, and 2) consistency of the DL model with the physics
of the phenomenon. The framework is generic and can be applied to model
phenomena from other fields of research as long as their behaviour is known.
To demonstrate the technique, a semi-supervised physics guided neural
network (SPGNN) has been developed that predicts the I-V characteristics of
a gallium nitride-based high electron mobility transistor (GaN HEMT). A
two-stage training method is proposed: in the first stage, the DL model is
trained via unsupervised learning, using the I-V equations of a field-effect
transistor as the loss function, which incorporates physical behaviour into
the model; in the second stage, the model is fine-tuned with a very small
set of experimental data. The SPGNN reduces the training-data requirement by
more than 80% while achieving performance similar to or better than a
traditional neural network (TNN), even under unseen conditions. The SPGNN
predicts 32.4% of the unseen test data with less than 1% error and only
0.4% of the unseen test data with more than 10% error.
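As a rough illustration of the two-stage training idea, the sketch below
pre-trains a small network against a textbook square-law FET I-V model and
then fine-tunes it on a handful of measured points. The square-law equation,
threshold voltage, bias ranges, network size, and the placeholder "measured"
data are all assumptions for illustration; the paper's actual I-V equations
and architecture are not reproduced here.

```python
import torch
import torch.nn as nn

# Stand-in physics: a textbook square-law FET model (the paper's actual
# I-V equations are not reproduced here); vth and k are placeholder values.
def fet_ids(vgs, vds, vth=-3.0, k=0.1):
    vov = torch.clamp(vgs - vth, min=0.0)          # overdrive; 0 in cutoff
    vds_eff = torch.minimum(vds, vov)              # clamps at saturation
    return k * (vov * vds_eff - 0.5 * vds_eff**2)  # triode/saturation, continuous

model = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(),
                      nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: unsupervised pre-training. No measured data is used; the physics
# equation evaluated at randomly sampled bias points supplies the target.
for _ in range(2000):
    vgs = torch.rand(256, 1) * 8.0 - 6.0   # assumed sweep: -6 V .. +2 V
    vds = torch.rand(256, 1) * 10.0        # assumed sweep:  0 V .. 10 V
    loss = nn.functional.mse_loss(model(torch.cat([vgs, vds], 1)),
                                  fet_ids(vgs, vds))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune on a very small experimental I-V set (the random
# placeholders below stand in for measured GaN HEMT data).
vgs_m, vds_m = torch.rand(40, 1) * 8 - 6, torch.rand(40, 1) * 10
ids_m = fet_ids(vgs_m, vds_m) + 0.01 * torch.randn(40, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(500):
    loss = nn.functional.mse_loss(model(torch.cat([vgs_m, vds_m], 1)), ids_m)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of stage 1 is that the loss needs no labels: the physics equation
itself supplies a target at arbitrarily many bias points, which is why so
little experimental data is needed in stage 2.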
Related papers
- Physics-Inspired Deep Learning and Transferable Models for Bridge Scour Prediction [2.451326684641447]
We introduce scour physics-inspired neural networks (SPINNs) for bridge scour prediction using deep learning.
SPINNs integrate physics-based, empirical equations into deep neural networks and are trained using site-specific historical scour monitoring data.
Despite variation in performance, SPINNs outperformed pure data-driven models in the majority of cases.
arXiv Detail & Related papers (2024-07-01T13:08:09Z)
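The physics-in-the-loss pattern that SPINNs describe can be sketched as a
composite loss in which an empirical equation regularizes the data fit. The
`empirical_scour` power law below is a hypothetical stand-in with invented
coefficients, not the empirical relation used in the paper.

```python
import torch
import torch.nn as nn

def empirical_scour(velocity, depth):
    # Hypothetical power-law stand-in for an empirical scour equation;
    # the coefficient and exponents are invented for illustration.
    return 0.5 * velocity**0.6 * depth**0.4

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

def spinn_style_loss(x, y_measured, lam=0.1):
    # x columns: [velocity, depth]; y_measured: monitored scour depth.
    pred = model(x)
    data_term = nn.functional.mse_loss(pred, y_measured)
    physics_term = nn.functional.mse_loss(
        pred, empirical_scour(x[:, :1], x[:, 1:2]))
    return data_term + lam * physics_term  # empirical equation as regularizer

x = torch.rand(64, 2) + 0.1               # positive inputs for the power law
y = torch.rand(64, 1)
print(spinn_style_loss(x, y))             # scalar training loss
```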
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
arXiv Detail & Related papers (2024-02-11T17:40:26Z)
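Hard linear equality constraints A y = b can be guaranteed by appending a
non-trainable projection layer to the network output, derived from the KKT
conditions of the least-squares projection min ||y* - y||^2 subject to
A y* = b. The sketch below shows this generic construction; the paper's
exact formulation may differ.

```python
import torch
import torch.nn as nn

class EqualityProjection(nn.Module):
    """Projects each output row onto {y : A y = b} (A needs full row rank)."""
    def __init__(self, A, b):
        super().__init__()
        self.register_buffer("A", A)
        self.register_buffer("b", b)
        # Precompute A^T (A A^T)^{-1} once; the layer has no trainable weights.
        self.register_buffer("P", A.T @ torch.linalg.inv(A @ A.T))

    def forward(self, y):
        residual = y @ self.A.T - self.b   # (batch, m) constraint violation
        return y - residual @ self.P.T     # minimal L2 correction

# Example: force outputs to satisfy y1 + y2 + y3 = 1 (e.g. a mass balance).
A = torch.tensor([[1.0, 1.0, 1.0]])
b = torch.tensor([1.0])
net = nn.Sequential(nn.Linear(4, 3), EqualityProjection(A, b))
y = net(torch.randn(5, 4))
print(torch.allclose(y.sum(dim=1), torch.ones(5)))  # True
```

Because the projection picks the closest feasible point in the L2 sense, the
constraint holds exactly at every forward pass while perturbing the raw
prediction as little as possible.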
- Enhancing Deep Neural Network Training Efficiency and Performance through Linear Prediction [0.0]
Deep neural networks (DNNs) have achieved remarkable success in various fields, including computer vision and natural language processing.
This paper proposes a method to optimize DNN training effectiveness, with the goal of improving model performance.
arXiv Detail & Related papers (2023-10-17T03:11:30Z)
- Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models with a supervised learning approach, trained from scratch on the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
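The pattern NSP exemplifies, an explicit physics model with learnable
parameters living inside a neural network, can be sketched as a single
dynamics step: a hand-written relaxation-toward-goal force with a trainable
time constant, plus a learned residual correction. The force law and
dimensions below are illustrative assumptions, far simpler than the actual
NSP model.

```python
import torch
import torch.nn as nn

class PhysicsStep(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable physics parameter: relaxation time of the goal force.
        self.tau = nn.Parameter(torch.tensor(0.5))
        # Learned residual for everything the simple force law misses.
        self.correction = nn.Sequential(nn.Linear(4, 16), nn.Tanh(),
                                        nn.Linear(16, 2))

    def forward(self, pos, vel, goal, dt=0.1):
        # Explicit physics: relax velocity toward the unit goal direction.
        to_goal = goal - pos
        desired = to_goal / to_goal.norm(dim=-1, keepdim=True)
        accel = (desired - vel) / self.tau
        accel = accel + self.correction(torch.cat([pos, vel], dim=-1))
        vel = vel + dt * accel
        return pos + dt * vel, vel

step = PhysicsStep()
pos, vel, goal = torch.zeros(8, 2), torch.zeros(8, 2), torch.ones(8, 2)
pos, vel = step(pos, vel, goal)   # one rollout step for 8 agents
```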
- Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility of mechanistic models and the data-driven expressibility of AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- A transfer learning enhanced the physics-informed neural network model for vortex-induced vibration [0.0]
This paper proposes a transfer-learning-enhanced physics-informed neural network (PINN) model to study two-dimensional vortex-induced vibration (VIV).
Used in conjunction with transfer learning, the PINN improves learning efficiency and retains predictability on the target task by reusing knowledge of common characteristics from the source model, without requiring a huge quantity of data.
arXiv Detail & Related papers (2021-12-29T08:20:23Z)
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models achieve superior performance on most NLP tasks due to their large parameter capacity, but this also leads to a huge computation cost.
We explore accelerating large-model inference via conditional computation, exploiting the sparse-activation phenomenon.
We propose transforming a large model into an equally sized mixture-of-experts (MoE) version, a process called MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
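A toy sketch of the MoEfication idea follows: partition a feed-forward
layer's hidden units into "experts" and evaluate only a few per input,
exploiting sparse activation. The random weights, contiguous slicing, and
activation-based scoring below are stand-ins; the paper constructs experts
by clustering co-activated neurons and trains a small router to select them.

```python
import torch

d_model, d_ff, n_exp = 8, 32, 4
W1 = torch.randn(d_ff, d_model)   # stand-in for pretrained FFN weights
W2 = torch.randn(d_model, d_ff)

# Partition hidden units into n_exp experts (real MoEfication clusters
# co-activating neurons rather than slicing contiguously).
W1_e = W1.reshape(n_exp, d_ff // n_exp, d_model)
W2_e = W2.T.reshape(n_exp, d_ff // n_exp, d_model)

def moefied_ffn(x, k=2):
    # Score experts; here we peek at the dense activations to keep the
    # sketch self-contained, whereas the paper trains a router instead.
    h = torch.relu(W1 @ x)
    scores = h.view(n_exp, -1).mean(dim=1)
    top = scores.topk(k).indices
    # Evaluate only the selected experts' weight slices.
    out = torch.zeros(d_model)
    for e in top:
        out += W2_e[e].T @ torch.relu(W1_e[e] @ x)
    return out

x = torch.randn(d_model)
print(moefied_ffn(x).shape)  # torch.Size([8])
```

With k = n_exp this reproduces the dense FFN exactly, so the quality/compute
trade-off is controlled entirely by how many experts are activated.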
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.