Reliable and Efficient Inverse Analysis using Physics-Informed Neural Networks with Distance Functions and Adaptive Weight Tuning
- URL: http://arxiv.org/abs/2504.18091v1
- Date: Fri, 25 Apr 2025 05:39:09 GMT
- Title: Reliable and Efficient Inverse Analysis using Physics-Informed Neural Networks with Distance Functions and Adaptive Weight Tuning
- Authors: Shota Deguchi, Mitsuteru Asai
- Abstract summary: PINNs have attracted significant attention in scientific machine learning for their capability to solve forward and inverse problems governed by partial differential equations. This paper leverages distance functions (R-functions) to enforce boundary conditions, extends this imposition method to inverse problems solved with PINNs, and introduces an adaptive weight tuning technique to ensure reliable and efficient inverse analysis. The approach offers a reliable and efficient framework for inverse analysis using PINNs, with potential applications across a wide range of engineering problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-informed neural networks (PINNs) have attracted significant attention in scientific machine learning for their capability to solve forward and inverse problems governed by partial differential equations. However, the accuracy of PINN solutions is often limited by the treatment of boundary conditions. Conventional penalty-based methods, which incorporate boundary conditions as penalty terms in the loss function, cannot guarantee exact satisfaction of the given boundary conditions and are highly sensitive to the choice of penalty parameters. This paper demonstrates that distance functions, specifically R-functions, can be leveraged to enforce boundary conditions, overcoming these limitations. R-functions provide normalized distance fields, enabling accurate representation of boundary geometries, including non-convex domains, and facilitating various types of boundary conditions. We extend this distance function-based boundary condition imposition method to inverse problems using PINNs and introduce an adaptive weight tuning technique to ensure reliable and efficient inverse analysis. We demonstrate the efficacy of the method through several numerical experiments. Numerical results show that the proposed method solves inverse problems more accurately and efficiently than penalty-based methods, even in the presence of complex non-convex geometries. This approach offers a reliable and efficient framework for inverse analysis using PINNs, with potential applications across a wide range of engineering problems.
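To make the workflow concrete, here is a minimal, hypothetical PyTorch sketch of the two ingredients in the abstract, applied to a 1D Poisson inverse problem. The trial function is the network output multiplied by a distance function phi (here simply phi(x) = x(1 - x), which vanishes exactly on the interval's boundary; the paper builds phi with R-functions for general, possibly non-convex geometries), and the weight on the data-misfit term is adapted during training. The gradient-balancing update for `lam` is a generic stand-in, not the paper's adaptive weight tuning rule, and all names (`u_hat`, `pde_residual`, `log_k`) are illustrative.

```python
import torch

torch.manual_seed(0)

# Unknown physical parameter to be identified (true value used only to make data).
k_true = 2.0

# Manufactured problem: -k u'' = f on (0, 1), u(0) = u(1) = 0,
# with u(x) = sin(pi x)  =>  f(x) = k_true * pi**2 * sin(pi x).
def u_exact(x): return torch.sin(torch.pi * x)
def f_rhs(x):   return k_true * torch.pi**2 * torch.sin(torch.pi * x)

# Small fully connected network N(x; theta).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# Trainable estimate of the unknown parameter (log-parameterized for positivity).
log_k = torch.nn.Parameter(torch.tensor(0.0))

def u_hat(x):
    # Exact imposition of homogeneous Dirichlet BCs: u_hat = phi(x) * N(x),
    # where phi vanishes on the boundary {0, 1}.
    phi = x * (1.0 - x)
    return phi * net(x)

def pde_residual(x):
    x = x.requires_grad_(True)
    u = u_hat(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    k = torch.exp(log_k)
    return -k * d2u - f_rhs(x)

# Sparse, noisy observations of u for the inverse problem.
x_obs = torch.linspace(0.1, 0.9, 9).reshape(-1, 1)
u_obs = u_exact(x_obs) + 0.01 * torch.randn_like(x_obs)

x_col = torch.rand(256, 1)                      # collocation points
opt = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-3)
lam = torch.tensor(1.0)                          # adaptive weight on the data term

for it in range(3000):
    opt.zero_grad()
    loss_pde = pde_residual(x_col).pow(2).mean()
    loss_data = (u_hat(x_obs) - u_obs).pow(2).mean()

    # Stand-in adaptive weighting: balance gradient magnitudes of the two terms.
    # (The paper's own tuning rule may differ; this is only illustrative.)
    if it % 100 == 0:
        g_pde = torch.autograd.grad(loss_pde, net[0].weight, retain_graph=True)[0].abs().mean()
        g_dat = torch.autograd.grad(loss_data, net[0].weight, retain_graph=True)[0].abs().mean()
        lam = 0.9 * lam + 0.1 * (g_pde / (g_dat + 1e-12)).detach()

    (loss_pde + lam * loss_data).backward()
    opt.step()

print("estimated k:", torch.exp(log_k).item())  # should approach k_true = 2.0
```

Because the boundary conditions are satisfied by construction, no boundary penalty term (or penalty parameter) appears in the loss; only the PDE residual and the data misfit remain to be balanced.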
Related papers
- PINN-FEM: A Hybrid Approach for Enforcing Dirichlet Boundary Conditions in Physics-Informed Neural Networks [1.1060425537315088]
Physics-Informed Neural Networks (PINNs) solve partial differential equations (PDEs) by embedding the governing physics in the training loss. We propose a hybrid approach, PINN-FEM, which combines PINNs with finite element methods (FEM) to impose strong Dirichlet boundary conditions via domain decomposition. The method incorporates FEM-based representations near the boundary, ensuring exact enforcement without compromising convergence.
arXiv Detail & Related papers (2025-01-14T00:47:15Z)
- Fredholm Integral Equations Neural Operator (FIE-NO) for Data-Driven Boundary Value Problems [0.0]
We present a physics-guided operator learning method (FIE-NO) for solving Boundary Value Problems (BVPs) with irregular boundaries.
We demonstrate that the proposed method achieves superior performance in addressing BVPs.
Our approach can generalize across multiple scenarios, including those with unknown equation forms and intricate boundary shapes.
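For orientation, a linear Fredholm integral equation of the second kind, the kind of boundary-value reformulation such an operator targets, has the textbook form (not quoted from the paper)

$$ u(x) = f(x) + \lambda \int_{\Omega} K(x, y)\, u(y)\, \mathrm{d}y, $$

where K is a given kernel and f a source term; the learned operator maps the problem data (boundary shape, f) to the solution u.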
arXiv Detail & Related papers (2024-08-20T00:15:27Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs)
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
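One rough reading of "region optimization" (my paraphrase; RoPINN's actual algorithm, including its calibration of region size, is more involved) is that the residual loss is evaluated over a small neighborhood of each collocation point rather than at the point alone. A hypothetical sketch:

```python
import torch

def region_loss(residual_fn, x_anchors, radius=0.01, n_samples=8):
    """Average squared PDE residual over a small region around each anchor
    point, instead of at the anchor alone (a rough sketch of the idea)."""
    # Sample offsets inside a cube of half-width `radius` around each anchor.
    offsets = (torch.rand(x_anchors.shape[0], n_samples, x_anchors.shape[1]) * 2 - 1) * radius
    x_region = (x_anchors.unsqueeze(1) + offsets).reshape(-1, x_anchors.shape[1])
    return residual_fn(x_region).pow(2).mean()

# Toy usage with a dummy residual so the snippet runs stand-alone.
dummy_residual = lambda x: torch.sin(3.0 * x).sum(dim=1)
x_col = torch.rand(128, 2)
print(region_loss(dummy_residual, x_col).item())
```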
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Enhancing Hypergradients Estimation: A Study of Preconditioning and Reparameterization [49.73341101297818]
Bilevel optimization aims to optimize an outer objective function that depends on the solution to an inner optimization problem.
The conventional method to compute the so-called hypergradient of the outer problem is to use the Implicit Function Theorem (IFT)
We study the error of the IFT method and analyze two strategies to reduce this error.
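For reference, the standard bilevel identity behind the IFT hypergradient (a textbook result, not taken from the paper): with inner solution $w^*(\lambda) = \arg\min_w g(w, \lambda)$ and outer objective $F(\lambda) = L(w^*(\lambda), \lambda)$,

$$ \nabla_\lambda F = \nabla_\lambda L - \nabla^2_{\lambda w} g \,\big[\nabla^2_{ww} g\big]^{-1} \nabla_w L, $$

with all derivatives evaluated at $(w^*(\lambda), \lambda)$. The error studied arises because $w^*(\lambda)$ and the inverse-Hessian solve are only available approximately.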
arXiv Detail & Related papers (2024-02-26T17:09:18Z)
- Spectral operator learning for parametric PDEs without data reliance [6.7083321695379885]
We introduce a novel operator learning-based approach for solving parametric partial differential equations (PDEs) without relying on training data.
The proposed framework demonstrates superior performance compared to existing scientific machine learning techniques.
arXiv Detail & Related papers (2023-10-03T12:37:15Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. It uses the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
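As a self-contained toy illustration of what a probabilistic representation means (this is the classical Feynman-Kac picture for the heat equation on the real line, not the paper's training scheme): the solution of u_t = kappa * u_xx with initial condition u0 equals the expectation u(x, t) = E[u0(x + sqrt(2*kappa*t) * Z)], Z ~ N(0, 1), so a Monte Carlo average over random particles approximates it.

```python
import numpy as np

def mc_heat_solution(u0, x, t, kappa=0.1, n_particles=100_000, seed=0):
    """Monte Carlo estimate of u(x, t) via the Feynman-Kac representation
    u(x, t) = E[u0(x + sqrt(2*kappa*t) * Z)], Z ~ N(0, 1)."""
    z = np.random.default_rng(seed).standard_normal((n_particles, 1))
    return u0(x[None, :] + np.sqrt(2.0 * kappa * t) * z).mean(axis=0)

u0 = lambda x: np.sin(np.pi * x)
x = np.linspace(0.0, 1.0, 5)
t, kappa = 0.1, 0.1

u_mc = mc_heat_solution(u0, x, t, kappa)
u_ref = np.exp(-kappa * np.pi**2 * t) * np.sin(np.pi * x)  # analytic solution on the real line
print(np.abs(u_mc - u_ref).max())                          # small Monte Carlo error
```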
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs)
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
arXiv Detail & Related papers (2022-07-18T15:11:43Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
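To illustrate the containment claim with a generic example (not the paper's architecture): on a regular 1D grid, a single hop of message passing with fixed weights reproduces the central-difference Laplacian; a learned solver replaces such hand-set weights with neural functions.

```python
import numpy as np

def message_passing_step(u, w_left, w_right, w_self):
    """One hop of message passing on a 1D chain graph: each interior node
    aggregates weighted messages from its two neighbors plus its own state."""
    out = w_self * u.copy()
    out[1:-1] += w_left * u[:-2] + w_right * u[2:]
    return out[1:-1]  # interior nodes only

# With weights (1, 1, -2) / dx**2 the aggregation is exactly the
# second-order central-difference approximation of u_xx.
dx = 0.01
x = np.arange(0.0, 1.0 + dx, dx)
u = np.sin(2 * np.pi * x)

lap_mp = message_passing_step(u, 1.0, 1.0, -2.0) / dx**2
lap_true = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x[1:-1])
print(np.abs(lap_mp - lap_true).max())  # O(dx**2) error
```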
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks [0.5804039129951741]
We introduce geometry-aware trial functions in artificial neural networks to improve the training in deep learning for partial differential equations.
To exactly impose homogeneous Dirichlet boundary conditions, the trial function is taken as $\phi$ multiplied by the PINN approximation, where $\phi$ is an approximate distance function to the boundary.
We present numerical solutions for linear and nonlinear boundary-value problems over domains with affine and curved boundaries.
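In the inhomogeneous case the standard construction (the same distance-function idea the main paper above builds on) is

$$ \hat{u}(\mathbf{x}) = g(\mathbf{x}) + \phi(\mathbf{x})\,\mathcal{N}(\mathbf{x}; \boldsymbol{\theta}), $$

where $g$ extends the prescribed Dirichlet data into the domain and $\phi = 0$ on the Dirichlet boundary (with $\phi > 0$ elsewhere), so $\hat{u}$ satisfies the boundary condition exactly for any network parameters $\boldsymbol{\theta}$.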
arXiv Detail & Related papers (2021-04-17T03:02:52Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
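For context, the projection-free building block these methods extend is the classic conditional gradient (Frank-Wolfe) step; the sketch below runs it on a probability simplex for a least-squares objective (the constraint subsampling, smoothing, and variance reduction described above are not shown).

```python
import numpy as np

# Plain conditional gradient (Frank-Wolfe) for f(x) = 0.5 * ||A x - b||^2
# over the probability simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)

x = np.full(10, 1.0 / 10)                      # start at the simplex barycenter
for k in range(200):
    grad = A.T @ (A @ x - b)                   # gradient of f at x
    s = np.zeros(10); s[np.argmin(grad)] = 1.0 # linear minimization over the simplex hits a vertex
    gamma = 2.0 / (k + 2)                      # classic step-size rule
    x = (1 - gamma) * x + gamma * s            # convex combination stays feasible

print(0.5 * np.linalg.norm(A @ x - b) ** 2)
```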
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- Extreme Theory of Functional Connections: A Physics-Informed Neural Network Method for Solving Parametric Differential Equations [0.0]
We present a physics-informed method, called X-TFC, for solving problems involving parametric differential equations (DEs).
X-TFC differs from PINN and Deep-TFC: whereas PINN and Deep-TFC use a deep neural network, X-TFC uses a single-layer network, or more precisely, an Extreme Learning Machine (ELM).
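A minimal sketch of the ELM ingredient (generic ELM regression only, not the full X-TFC method, which fits the output weights against the differential-equation residual under a constrained TFC expression): the hidden layer is random and frozen, and only the linear output weights are fitted, by least squares rather than gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 100

x_train = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y_train = np.sin(3.0 * x_train).ravel()

W = rng.standard_normal((x_train.shape[1], n_hidden))  # random input weights (never updated)
b = rng.standard_normal(n_hidden)                       # random biases (never updated)
H = np.tanh(x_train @ W + b)                            # hidden-layer features

beta, *_ = np.linalg.lstsq(H, y_train, rcond=None)      # fit output weights by least squares
print(np.abs(H @ beta - y_train).max())                 # training fit error
```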
arXiv Detail & Related papers (2020-05-15T22:51:04Z)