RoPINN: Region Optimized Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2405.14369v3
- Date: Wed, 23 Oct 2024 02:26:20 GMT
- Title: RoPINN: Region Optimized Physics-Informed Neural Networks
- Authors: Haixu Wu, Huakun Luo, Yuezhou Ma, Jianmin Wang, Mingsheng Long
- Abstract summary: Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm, region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
- Abstract: Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs) by enforcing outputs and gradients of deep models to satisfy target equations. Due to the limitation of numerical computation, PINNs are conventionally optimized on finite selected points. However, since PDEs are usually defined on continuous domains, solely optimizing models on scattered points may be insufficient to obtain an accurate solution for the whole domain. To mitigate this inherent deficiency of the default scatter-point optimization, this paper proposes and theoretically studies a new training paradigm, region optimization. Concretely, we propose to extend the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances optimization and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation. Code is available at this repository: https://github.com/thuml/RoPINN.
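The region-optimization idea in the abstract can be sketched in a minimal, self-contained way: a toy "PINN" whose model is a polynomial, trained on the ODE u'(x) = u(x) with u(0) = 1, where each collocation point is perturbed inside a small neighborhood at every step instead of being reused verbatim. The radius-calibration rule below (shrinking the region when gradient estimates are noisy) is a simplified stand-in for the paper's trust-region calibration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: u'(x) = u(x) on [0, 1] with u(0) = 1 (exact solution exp(x)).
# The "network" is a degree-4 polynomial u(x; c) = sum_k c_k x^k, so the PDE
# residual r(x; c) = u'(x; c) - u(x; c) is linear in c and plain gradient
# descent works.
DEG = 4
base_pts = np.linspace(0.0, 1.0, 32)           # fixed collocation points

def residual_features(x):
    # r(x; c) = sum_k c_k (k x^{k-1} - x^k)  ->  matrix A with A @ c = r
    k = np.arange(DEG + 1)
    return k * x[:, None] ** np.clip(k - 1, 0, None) - x[:, None] ** k

def loss_and_grad(c, x):
    A = residual_features(x)
    r = A @ c                                   # PDE residual at sampled points
    bc = c[0] - 1.0                             # u(0) = c_0, boundary penalty
    loss = float(np.mean(r**2) + bc**2)
    grad = 2.0 * A.T @ r / len(x)
    grad[0] += 2.0 * bc
    return loss, grad

c = 0.1 * rng.standard_normal(DEG + 1)
radius, r_min, r_max = 0.02, 1e-3, 0.05
g_sq_ema = 1.0
for step in range(20000):
    # Region optimization: perturb each collocation point inside its
    # neighborhood instead of always training on the same isolated points.
    x = np.clip(base_pts + rng.uniform(-radius, radius, base_pts.shape), 0.0, 1.0)
    loss, grad = loss_and_grad(c, x)
    c -= 0.05 * grad
    # Simplified trust-region-style calibration: shrink the sampling region
    # when gradient estimates are noisy, grow it back when they stabilize.
    g_sq_ema = 0.99 * g_sq_ema + 0.01 * float(grad @ grad)
    radius = float(np.clip(0.05 / (1.0 + np.sqrt(g_sq_ema)), r_min, r_max))

final_loss, _ = loss_and_grad(c, base_pts)
print(final_loss)
```

Because the residual is evaluated at fresh points inside each neighborhood every step, the model is penalized over small continuous regions rather than a fixed scatter, which is the core of the paradigm; no extra backpropagation is needed beyond the ordinary loss gradient.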
Related papers
- ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks [71.02216400133858]
Physics-informed neural networks (PINNs) have raised high expectations for solving partial differential equations (PDEs).
Previous research observed the propagation failure phenomenon of PINNs.
This paper provides the first formal and in-depth study of propagation failure and its root cause.
arXiv Detail & Related papers (2025-02-02T13:56:38Z) - PIG: Physics-Informed Gaussians as Adaptive Parametric Mesh Representations [5.4087282763977855]
The approximation of Partial Differential Equations (PDEs) using neural networks has seen significant advancements.
PINNs often suffer from limited accuracy due to the spectral bias of Multi-Layer Perceptrons (MLPs), which struggle to learn high-frequency and non-linear components.
We propose Physics-Informed Gaussians (PIGs), which combine feature embeddings using Gaussian functions with a lightweight neural network.
arXiv Detail & Related papers (2024-12-08T16:58:29Z) - PACMANN: Point Adaptive Collocation Method for Artificial Neural Networks [44.99833362998488]
PINNs minimize a loss function which includes the PDE residual determined for a set of collocation points.
Previous work has shown that the number and distribution of these collocation points have a significant influence on the accuracy of the PINN solution.
We present the Point Adaptive Collocation Method for Artificial Neural Networks (PACMANN)
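Adaptive collocation of the kind described above can be illustrated generically: move each collocation point uphill on the residual magnitude so that points concentrate where the PDE is currently solved worst. This is a hedged sketch of the general idea, not PACMANN's exact update rule (which the blurb does not specify); the residual function here is a hypothetical stand-in with a sharp feature.

```python
import numpy as np

# Hypothetical residual magnitude with a sharp feature near x = 0.5; in a
# real PINN this would be |PDE residual of the current network|.
def residual_mag(x):
    return np.exp(-100.0 * (x - 0.5) ** 2)

def adapt_points(x, steps=50, lr=1e-3, h=1e-4, lo=0.0, hi=1.0):
    """Gradient ascent of each collocation point on the residual magnitude
    (central finite differences), clipped to stay inside the domain."""
    for _ in range(steps):
        grad = (residual_mag(x + h) - residual_mag(x - h)) / (2.0 * h)
        x = np.clip(x + lr * grad, lo, hi)
    return x

pts = np.linspace(0.05, 0.95, 19)
moved = adapt_points(pts)
print(residual_mag(pts).mean(), residual_mag(moved).mean())
```

After the update, the average residual magnitude seen by the point set increases, i.e. the collocation budget is spent where the error lives.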
arXiv Detail & Related papers (2024-11-29T11:31:11Z) - SetPINNs: Set-based Physics-informed Neural Networks [31.193471532024407]
We introduce SetPINNs, a framework that effectively captures local dependencies.
We partition a domain into sets to model local dependencies while simultaneously enforcing physical laws.
arXiv Detail & Related papers (2024-09-30T11:41:58Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Deep NURBS -- Admissible Physics-informed Neural Networks [0.0]
We propose a new numerical scheme for physics-informed neural networks (PINNs) that enables precise and inexpensive solutions of partial differential equations (PDEs).
The proposed approach combines admissible NURBS parametrizations required to define the physical domain and the Dirichlet boundary conditions with a PINN solver.
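The "admissible parametrization" trick above can be shown in one dimension: multiply the network output by a factor vanishing on the boundary and add a lift matching the Dirichlet data, so the boundary conditions hold exactly for any network parameters. This toy distance factor stands in for the paper's NURBS parametrization of the physical domain; the function names are illustrative.

```python
import numpy as np

# 1-D illustration on [0, 1] with Dirichlet data u(0) = a, u(1) = b.
def net(x):
    # stand-in for a neural network: any smooth function of x will do
    return np.sin(3.0 * x) + 0.2 * x**2

def admissible_u(x, a=2.0, b=-1.0):
    lift = a * (1.0 - x) + b * x          # matches the boundary values
    dist = x * (1.0 - x)                  # vanishes at x = 0 and x = 1
    return lift + dist * net(x)

print(admissible_u(np.array([0.0, 1.0])))
```

Since the boundary conditions are satisfied by construction, the PINN loss needs only the PDE residual term, which removes the usual tuning of a boundary-penalty weight.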
arXiv Detail & Related papers (2022-10-25T10:35:45Z) - Bi-level Physics-Informed Neural Networks for PDE Constrained
Optimization using Broyden's Hypergradients [29.487375792661005]
We present a novel bi-level optimization framework to solve PDE constrained optimization problems.
For the inner loop optimization, we adopt PINNs to solve the PDE constraints only.
For the outer loop, we design a novel method using Broyden's method, based on the Implicit Function Theorem.
arXiv Detail & Related papers (2022-09-15T06:21:24Z) - Revisiting PINNs: Generative Adversarial Physics-informed Neural
Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
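An AdaBoost-flavored point-weighting scheme of the kind mentioned above can be sketched as follows: collocation points with large residuals (the "hard" points) receive exponentially more mass in the loss. The exact PW update in the paper may differ; the residual values and the `temp` parameter here are illustrative.

```python
import numpy as np

# Hypothetical per-point PDE residuals of the current network.
residuals = np.array([0.01, 0.50, 0.05, 1.20, 0.10])

def point_weights(r, temp=1.0):
    """AdaBoost-style weights: exponential in the residual magnitude,
    normalized to sum to 1 so the weighted loss stays comparable in scale."""
    w = np.exp(np.abs(r) / temp)
    return w / w.sum()

w = point_weights(residuals)
weighted_loss = float(np.sum(w * residuals**2))
print(w, weighted_loss)
```

Recomputing the weights every few epochs focuses training on the points the network currently fits worst, mirroring how AdaBoost reweights misclassified samples.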
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which uses dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - Neural Proximal/Trust Region Policy Optimization Attains Globally
Optimal Policy [119.12515258771302]
We show that a variant of PPO (proximal policy optimization) equipped with over-parametrization converges to a globally optimal policy.
The key to our analysis is the global convergence of infinite-dimensional mirror descent under a notion of one-point monotonicity, where the gradient and iterate are instantiated by neural networks.
arXiv Detail & Related papers (2019-06-25T03:20:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.