Boundary-Decoder network for inverse prediction of capacitor electrostatic analysis
- URL: http://arxiv.org/abs/2412.00113v1
- Date: Thu, 28 Nov 2024 05:51:00 GMT
- Title: Boundary-Decoder network for inverse prediction of capacitor electrostatic analysis
- Authors: Kart-Leong Lim, Rahul Dutta, Mihai Rotaru
- Abstract summary: We propose an end-to-end deep learning approach to model parameter changes to the boundary conditions.
It is shown that our method can significantly outperform both plain vanilla deep learning (NN) and physics informed neural net (PINN) under dynamic boundary conditions.
- Score: 0.49157446832511503
- Abstract: Traditional electrostatic simulations are mesh-based methods that convert partial differential equations into an algebraic system of equations, whose solutions are approximated through numerical methods. These methods are time-consuming, and any change in their initial or boundary conditions requires solving the numerical problem again. Newer computational methods such as the physics informed neural net (PINN) similarly require re-training when boundary conditions change. In this work, we propose an end-to-end deep learning approach to model parameter changes to the boundary conditions. The proposed method is demonstrated on the test problem of a long air-filled capacitor structure. The proposed approach is compared to plain vanilla deep learning (NN) and PINN. It is shown that our method can significantly outperform both NN and PINN under dynamic boundary conditions while retaining its full capability as a forward model.
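As a rough illustration of the general idea of treating the boundary condition as a network input (a minimal PyTorch sketch only, not the authors' Boundary-Decoder architecture; the analytic parallel-plate data below is a stand-in for FEM training solutions):

```python
# Minimal sketch: a surrogate conditioned on the boundary-condition parameter,
# so a new plate voltage needs no retraining. Not the paper's exact network.
import torch
import torch.nn as nn

class BCConditionedSurrogate(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),      # inputs: (x, y, plate voltage V)
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),                 # output: potential phi(x, y; V)
        )

    def forward(self, x, y, v):
        return self.net(torch.stack([x, y, v], dim=-1)).squeeze(-1)

# Stand-in training data for an idealised parallel-plate capacitor:
# phi varies linearly between the grounded plate (y=0) and the plate at y=1
# held at voltage V, i.e. phi = V * y (placeholder for FEM solutions).
torch.manual_seed(0)
x = torch.rand(4096); y = torch.rand(4096); v = torch.rand(4096) * 10.0
phi_true = v * y

model = BCConditionedSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x, y, v), phi_true)
    loss.backward()
    opt.step()

# At inference time a new boundary voltage is just another input value,
# whereas a PINN trained for one fixed boundary condition would need retraining.
print(model(torch.tensor([0.5]), torch.tensor([0.5]), torch.tensor([7.0])))
```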
Related papers
- Stiff Transfer Learning for Physics-Informed Neural Networks [1.5361702135159845]
We propose a novel approach, stiff transfer learning for physics-informed neural networks (STL-PINNs), to tackle stiff ordinary differential equations (ODEs) and partial differential equations (PDEs).
Our methodology involves training a Multi-Head-PINN in a low-stiff regime, and obtaining the final solution in a high stiff regime by transfer learning.
This addresses the failure modes related to stiffness in PINNs while maintaining computational efficiency by computing "one-shot" solutions.
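A minimal sketch of the transfer-learning idea, under the assumption that a shared trunk is pre-trained on low-stiffness decay ODEs du/dt = -k·u and only a fresh head is fitted for a stiff case (this is an illustration, not the STL-PINN authors' implementation):

```python
# Sketch: multi-head PINN pre-training in a low-stiff regime, then a frozen-trunk
# transfer step for a stiff case. Toy problem du/dt = -k u, u(0) = 1.
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh())
low_k = [1.0, 2.0, 5.0]                                   # low-stiffness regimes
heads = nn.ModuleList([nn.Linear(64, 1) for _ in low_k])

def residual(head, k, t):
    t = t.requires_grad_(True)
    u = head(trunk(t))
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    ode = du + k * u                                      # residual of du/dt = -k u
    ic = head(trunk(torch.zeros(1, 1))) - 1.0             # initial condition u(0) = 1
    return (ode ** 2).mean() + (ic ** 2).mean()

t_col = torch.linspace(0, 1, 128).unsqueeze(-1)
opt = torch.optim.Adam(list(trunk.parameters()) + list(heads.parameters()), lr=1e-3)
for _ in range(3000):                                     # multi-head pre-training
    opt.zero_grad()
    loss = sum(residual(h, k, t_col) for h, k in zip(heads, low_k))
    loss.backward()
    opt.step()

# Transfer step: freeze the trunk and train only a new head for a stiff case.
for p in trunk.parameters():
    p.requires_grad_(False)
stiff_head = nn.Linear(64, 1)
opt2 = torch.optim.Adam(stiff_head.parameters(), lr=1e-3)
for _ in range(1000):
    opt2.zero_grad()
    residual(stiff_head, 50.0, t_col).backward()
    opt2.step()
```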
arXiv Detail & Related papers (2025-01-28T20:27:38Z)
- FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
The numerical simulation and optimization of technical systems described by partial differential equations are expensive.
A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method.
In this paper, we extend this approach to saddle-point and non-linear fluid dynamics problems, respectively.
arXiv Detail & Related papers (2024-09-06T07:17:01Z)
- Neural Networks-based Random Vortex Methods for Modelling Incompressible Flows [0.0]
We introduce a novel Neural Networks-based approach for approximating solutions to the (2D) incompressible Navier--Stokes equations.
Our algorithm uses a Neural Network (NN), that approximates the vorticity based on a loss function that uses a computationally efficient formulation of the Random Vortex Dynamics.
arXiv Detail & Related papers (2024-05-22T14:36:23Z)
- Learning the solution operator of two-dimensional incompressible Navier-Stokes equations using physics-aware convolutional neural networks [68.8204255655161]
We introduce a technique with which it is possible to learn approximate solutions to the steady-state Navier--Stokes equations in varying geometries without the need of parametrization.
The results of our physics-aware CNN are compared to a state-of-the-art data-based approach.
arXiv Detail & Related papers (2023-08-04T05:09:06Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
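A minimal sketch of the idea of hard-wiring a physical prior into the network architecture (not the NCLaw authors' model; the governing PDE would be enforced outside the network by a classical solver, and the 1D stress-strain law below is a made-up example):

```python
# Sketch: a 1D neural constitutive law whose construction guarantees
# zero stress at zero strain, illustrating a prior enforced by architecture.
import torch
import torch.nn as nn

class NeuralConstitutiveLaw1D(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.stiffness = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Softplus(),   # strictly positive tangent scale
        )

    def forward(self, strain):
        # stress = strain * positive(strain)  =>  stress(0) = 0 by construction
        return strain * self.stiffness(strain)

law = NeuralConstitutiveLaw1D()
strain = torch.linspace(-0.1, 0.1, 5).unsqueeze(-1)
print(law(strain))   # passes exactly through the origin for any weights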
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
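A minimal sketch of the reduced-order idea (not the authors' NBF implementation): a branch network maps a PDE parameter to coefficients of a POD basis; the sketch uses the POD modes directly, where the paper regresses neural networks onto them, and the snapshot data below is synthetic.

```python
# Sketch: POD basis from solution snapshots + a branch network mapping
# the PDE parameter mu to reduced-order coefficients.
import numpy as np
import torch
import torch.nn as nn

# Synthetic snapshot matrix: rows are stand-in solutions u(x; mu).
x = np.linspace(0.0, 1.0, 101)
mus = np.linspace(0.5, 2.0, 20)
snapshots = np.stack([np.sin(mu * np.pi * x) for mu in mus])

# POD basis from the SVD of the snapshot matrix (keep r modes).
r = 4
_, _, vt = np.linalg.svd(snapshots, full_matrices=False)
pod_basis = torch.tensor(vt[:r], dtype=torch.float32)          # (r, n_x)

# Branch network: PDE parameter mu -> r basis coefficients.
branch = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, r))

mu_t = torch.tensor(mus, dtype=torch.float32).unsqueeze(-1)
u_t = torch.tensor(snapshots, dtype=torch.float32)
opt = torch.optim.Adam(branch.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    u_hat = branch(mu_t) @ pod_basis                           # reduced-order reconstruction
    nn.functional.mse_loss(u_hat, u_t).backward()
    opt.step()
```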
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
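A minimal sketch of one learned message-passing update on a periodic 1D grid (not the authors' full solver; the message and update networks here are untrained placeholders):

```python
# Sketch: a learned message-passing step on a 1D grid graph, in place of a
# hand-designed finite-difference stencil.
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # message from neighbour j to node i depends on both node states
        self.message = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # node update combines the current state and the aggregated messages
        self.update = nn.Sequential(nn.Linear(1 + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, u):
        # u: (n_nodes, 1) field values on a periodic 1D grid, two neighbours per node
        left = torch.roll(u, 1, dims=0)
        right = torch.roll(u, -1, dims=0)
        msgs = self.message(torch.cat([u, left], dim=-1)) + \
               self.message(torch.cat([u, right], dim=-1))
        return u + self.update(torch.cat([u, msgs], dim=-1))   # residual update, like a time step

step = MessagePassingStep()
u0 = torch.sin(torch.linspace(0, 2 * torch.pi, 64)).unsqueeze(-1)
u1 = step(u0)   # in practice this step would be trained against ground-truth rollouts
```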
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has an improved generalization ability for estimating parameter values for operating conditions not used in the training.
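A minimal sketch of physics-constrained parameter estimation in general (not the PCDNN authors' 0D VRFB model; the voltage law and the drifting resistance below are made-up stand-ins): the network predicts a physical parameter, and the loss is taken on the voltage produced by a physics model rather than on the parameter itself.

```python
# Sketch: network maps operating condition -> physical parameter; the loss is
# computed through a stand-in physics model of the cell voltage.
import torch
import torch.nn as nn

def physics_voltage(current, resistance, ocv=1.4):
    # toy stand-in for a 0D cell model: V = OCV - I * R
    return ocv - current * resistance

param_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())

# Synthetic "measurements": a resistance that drifts with operating current.
current = torch.linspace(0.1, 2.0, 200).unsqueeze(-1)
true_resistance = 0.05 + 0.02 * current
measured_v = physics_voltage(current, true_resistance)

opt = torch.optim.Adam(param_net.parameters(), lr=1e-3)
for _ in range(3000):
    opt.zero_grad()
    v_pred = physics_voltage(current, param_net(current))
    nn.functional.mse_loss(v_pred, measured_v).backward()
    opt.step()
```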
arXiv Detail & Related papers (2021-06-21T23:42:58Z)
- Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks [0.5804039129951741]
We introduce geometry-aware trial functions in artificial neural networks to improve the training in deep learning for partial differential equations.
To exactly impose homogeneous Dirichlet boundary conditions, the trial function is taken as the distance function $\phi$ multiplied by the PINN approximation.
We present numerical solutions for linear and nonlinear boundary-value problems over domains with affine and curved boundaries.
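A minimal 1D sketch of the exact-imposition idea: multiplying the network output by a function $\phi$ that vanishes on the boundary makes the homogeneous Dirichlet conditions hold by construction (here $\phi(x) = x(1-x)$ on the unit interval, a stand-in for the paper's distance functions on general geometries).

```python
# Sketch: trial function u = phi * NN, so u(0) = u(1) = 0 exactly; PINN residual
# loss for -u'' = pi^2 sin(pi x) on (0, 1), whose exact solution is sin(pi x).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def trial(x):
    phi = x * (1.0 - x)          # phi = 0 at x = 0 and x = 1
    return phi * net(x)          # boundary conditions satisfied for any weights

x = torch.linspace(0.0, 1.0, 101).unsqueeze(-1).requires_grad_(True)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    u = trial(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - torch.pi ** 2 * torch.sin(torch.pi * x)
    (residual ** 2).mean().backward()
    opt.step()

print(trial(torch.tensor([[0.0], [1.0]])))   # exactly zero on the boundary
```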
arXiv Detail & Related papers (2021-04-17T03:02:52Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
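A minimal sketch of the interpolation idea (not the authors' exact scheme, and with a hand-written toy vector field standing in for a learned one): store the forward trajectory at a few nodes and rebuild intermediate states with an interpolant, so the backward/adjoint pass can query the state without re-solving the dynamics in reverse.

```python
# Sketch: interpolate the stored forward trajectory of an ODE so that z(t) can be
# queried at arbitrary times during the backward pass.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import BarycentricInterpolator

def dynamics(t, z):
    # stand-in for a learned vector field f_theta(z, t)
    return -z + np.sin(t)

# Forward solve, saving the state at Chebyshev-like nodes on [0, T].
T = 5.0
nodes = 0.5 * T * (1.0 - np.cos(np.linspace(0.0, np.pi, 16)))
sol = solve_ivp(dynamics, (0.0, T), y0=[1.0], t_eval=nodes, rtol=1e-8)

# Interpolant used during the backward pass instead of reversing the ODE.
z_interp = BarycentricInterpolator(sol.t, sol.y[0])
print(z_interp(2.3))   # approximate state z(2.3) needed by the adjoint equations
```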
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
- Physics Informed Deep Learning for Transport in Porous Media. Buckley Leverett Problem [0.0]
We present a new hybrid physics-based machine-learning approach to reservoir modeling.
The methodology relies on a series of deep adversarial neural network architecture with physics-based regularization.
The proposed methodology is a simple and elegant way to instill physical knowledge to machine-learning algorithms.
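A minimal sketch of physics-based regularization for the Buckley-Leverett saturation equation s_t + f(s)_x = 0 with f(s) = s^2 / (s^2 + (1-s)^2) (the paper's adversarial architecture is omitted; this only shows how a PDE residual can be added to a data loss):

```python
# Sketch: PDE residual of the Buckley-Leverett equation at random collocation
# points, to be added to a data-fitting loss as a physics regularizer.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1), nn.Sigmoid())   # saturation constrained to [0, 1]

def frac_flow(s):
    return s ** 2 / (s ** 2 + (1.0 - s) ** 2)

xt = torch.rand(1024, 2, requires_grad=True)          # collocation points (x, t) in [0,1]^2
s = net(xt)
s_t = torch.autograd.grad(s.sum(), xt, create_graph=True)[0][:, 1:]
f_x = torch.autograd.grad(frac_flow(s).sum(), xt, create_graph=True)[0][:, :1]
pde_residual = (s_t + f_x).pow(2).mean()              # added to the data loss during training
```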
arXiv Detail & Related papers (2020-01-15T08:20:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.