Machine Learning-powered Compact Modeling of Stochastic Electronic
Devices using Mixture Density Networks
- URL: http://arxiv.org/abs/2311.05820v1
- Date: Fri, 10 Nov 2023 01:34:18 GMT
- Title: Machine Learning-powered Compact Modeling of Stochastic Electronic
Devices using Mixture Density Networks
- Authors: Jack Hutchins, Shamiul Alam, Dana S. Rampini, Bakhrom G. Oripov, Adam
N. McCaughan, Ahmedullah Aziz
- Abstract summary: Conventional deterministic models fall short when it comes to capturing the subtle yet critical variability exhibited by many electronic components.
We present an innovative approach that transcends the limitations of traditional modeling techniques by harnessing the power of machine learning.
This paper marks a significant step forward in the quest for accurate and versatile compact models, poised to drive innovation in the realm of electronic circuits.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The relentless pursuit of miniaturization and performance enhancement in
electronic devices has led to a fundamental challenge in the field of circuit
design and simulation: how to accurately account for the inherent stochastic
nature of certain devices. While conventional deterministic models have served
as indispensable tools for circuit designers, they fall short when it comes to
capturing the subtle yet critical variability exhibited by many electronic
components. In this paper, we present an innovative approach that transcends
the limitations of traditional modeling techniques by harnessing the power of
machine learning, specifically Mixture Density Networks (MDNs), to faithfully
represent and simulate the stochastic behavior of electronic devices. We
demonstrate our approach by modeling heater cryotrons, where the model captures
the stochastic switching dynamics observed in the experiment. Our model
shows 0.82% mean absolute error for switching probability. This paper marks a
significant step forward in the quest for accurate and versatile compact
models, poised to drive innovation in the realm of electronic circuits.
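
The core technique named in the abstract is a Mixture Density Network: a neural network whose output parameterizes a mixture distribution over the device response, so that sampling from the trained model reproduces the device's stochastic behavior rather than a single deterministic value. The sketch below is a minimal, generic MDN in PyTorch to illustrate the idea; the input/output variables (a drive condition and a measured per-trial response), network sizes, and mixture settings are assumptions for illustration and are not the authors' heater-cryotron model.

```python
# Minimal MDN sketch for stochastic device modeling (illustrative assumptions,
# not the paper's implementation): x is a drive/bias condition, y a measured
# per-trial device response.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, in_dim=1, hidden=64, n_components=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        # Separate heads for mixture weights, means, and standard deviations.
        self.pi = nn.Linear(hidden, n_components)
        self.mu = nn.Linear(hidden, n_components)
        self.log_sigma = nn.Linear(hidden, n_components)

    def forward(self, x):
        h = self.backbone(x)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)         # mixture log-weights
        mu = self.mu(h)                                         # component means
        sigma = torch.exp(self.log_sigma(h)).clamp(min=1e-4)    # positive std devs
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, y):
    """Negative log-likelihood of targets y under the predicted Gaussian mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(y.unsqueeze(-1))                   # per-component log-density
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Training-loop sketch on placeholder data.
model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 1)      # placeholder drive conditions
y = torch.randn(256)        # placeholder measured stochastic responses
for _ in range(100):
    opt.zero_grad()
    loss = mdn_nll(*model(x), y)
    loss.backward()
    opt.step()
```

Sampling repeatedly from the predicted mixture at a fixed drive condition yields an empirical switching probability that can be compared against measurement, which is one way a metric like the reported mean absolute error on switching probability could be evaluated.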
Related papers
- Real-Time Detection of Electronic Components in Waste Printed Circuit Boards: A Transformer-Based Approach [4.849820402342814]
We have proposed a practical approach that involves selective disassembly of the different types of electronic components from WPCBs.
In this paper we evaluate the real-time accuracy of electronic component detection and localization of the Real-Time DEtection TRansformer model architecture.
arXiv Detail & Related papers (2024-09-24T22:59:52Z)
- Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- MINN: Learning the dynamics of differential-algebraic equations and application to battery modeling [3.900623554490941]
We propose a novel architecture for generating model-integrated neural networks (MINN).
MINN enables integration at the level of learning the physics-based dynamics of the system.
We apply the proposed neural network architecture to model the electrochemical dynamics of lithium-ion batteries.
arXiv Detail & Related papers (2023-04-27T09:11:40Z)
- Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z)
- End-to-End Learning of Hybrid Inverse Dynamics Models for Precise and Compliant Impedance Control [16.88250694156719]
We present a novel hybrid model formulation that enables us to identify fully physically consistent inertial parameters of a rigid body dynamics model.
We compare our approach against state-of-the-art inverse dynamics models on a 7 degree of freedom manipulator.
arXiv Detail & Related papers (2022-05-27T07:39:28Z)
- Integrating Physics-Based Modeling with Machine Learning for Lithium-Ion Batteries [4.946066838162504]
This paper proposes two new frameworks to integrate physics-based models with machine learning to achieve high-precision modeling for LiBs.
The frameworks are characterized by informing the machine learning model of the state information of the physical model.
The study further expands to conduct aging-aware hybrid modeling, leading to the design of a hybrid model that is conscious of the state of health when making predictions.
arXiv Detail & Related papers (2021-12-24T07:39:02Z)
- Using scientific machine learning for experimental bifurcation analysis of dynamic systems [2.204918347869259]
This study focuses on training universal differential equation (UDE) models for physical nonlinear dynamical systems with limit cycles.
We consider examples where training data is generated by numerical simulations, and we also apply the proposed modelling concept to physical experiments.
We use both neural networks and Gaussian processes as universal approximators alongside the mechanistic models to give a critical assessment of the accuracy and robustness of the UDE modelling approach.
arXiv Detail & Related papers (2021-10-22T15:43:03Z)
- Efficient pre-training objectives for Transformers [84.64393460397471]
We study several efficient pre-training objectives for Transformers-based models.
We prove that eliminating the MASK token and considering the whole output during the loss are essential choices to improve performance.
arXiv Detail & Related papers (2021-04-20T00:09:37Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)