A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs
- URL: http://arxiv.org/abs/2210.03526v6
- Date: Sun, 4 Jun 2023 16:40:38 GMT
- Title: A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs
- Authors: Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Ze Cheng
- Abstract summary: We present a unified framework for solving geometrically complex PDEs with neural networks.
We first introduce the "extra fields" from the mixed finite element method to reformulate the PDEs.
We derive the general solutions of the BCs analytically, which are employed to construct an ansatz that automatically satisfies the BCs.
- Score: 25.52271761404213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a unified hard-constraint framework for solving geometrically
complex PDEs with neural networks, where the most commonly used Dirichlet,
Neumann, and Robin boundary conditions (BCs) are considered. Specifically, we
first introduce the "extra fields" from the mixed finite element method to
reformulate the PDEs so as to equivalently transform the three types of BCs
into linear equations. Based on the reformulation, we derive the general
solutions of the BCs analytically, which are employed to construct an ansatz
that automatically satisfies the BCs. With such a framework, we can train the
neural networks without adding extra loss terms and thus efficiently handle
geometrically complex PDEs, alleviating the unbalanced competition between the
loss terms corresponding to the BCs and PDEs. We theoretically demonstrate that
the "extra fields" can stabilize the training process. Experimental results on
real-world geometrically complex PDEs showcase the effectiveness of our method
compared with state-of-the-art baselines.
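As background for how such an ansatz works in the simplest case, the classic distance-function construction for a Dirichlet problem can be sketched as below. This is a generic hard-constraint device, not the paper's extra-field reformulation; `net` is a stand-in for a trainable network.

```python
import numpy as np

# Toy 1D problem on [0, 1] with Dirichlet BCs u(0) = g0, u(1) = g1.
g0, g1 = 2.0, -1.0

def net(x):
    # Stand-in for a neural network: any smooth function of x works here.
    return np.sin(3.0 * x) + 0.5 * x**2

def boundary_extension(x):
    # Smooth function matching the BC values at x = 0 and x = 1.
    return g0 * (1.0 - x) + g1 * x

def distance_factor(x):
    # Vanishes exactly on the boundary {0, 1}.
    return x * (1.0 - x)

def ansatz(x):
    # Hard-constrained trial solution: the BCs hold for ANY network
    # output, so no boundary loss term is needed during training.
    return boundary_extension(x) + distance_factor(x) * net(x)

# The BCs are satisfied identically, before any training.
print(ansatz(np.array([0.0, 1.0])))  # [ 2. -1.]
```

Because the constraint is built into the ansatz, the optimizer only ever sees the PDE residual loss, which is the imbalance the abstract refers to.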
Related papers
- BEACONS: Bounded-Error, Algebraically-Composable Neural Solvers for Partial Differential Equations [0.0]
We show how it is possible to circumvent limitations by constructing formally verified neural network solvers for PDEs. We show how it is possible to construct rigorous extrapolatory bounds on the worst-case L-infinity errors of shallow neural network approximations. The resulting framework, called BEACONS, comprises both an automatic code-proving system for the neural solvers themselves and a bespoke automated theorem-generator system for producing machine-checkable certificates of correctness.
arXiv Detail & Related papers (2026-02-16T15:49:19Z) - Boundary condition enforcement with PINNs: a comparative study and verification on 3D geometries [0.0]
Physics-informed neural networks (PINNs) have been studied extensively as a novel technique for solving forward and inverse problems in physics and engineering. There have been limited studies of PINNs on complex three-dimensional geometries, as the lack of a mesh and the reliance on the strong form of the partial differential equation (PDE) make boundary condition enforcement challenging. This work represents a step toward establishing PINNs as a mature numerical method, capable of competing head-to-head with incumbents such as the finite element method.
arXiv Detail & Related papers (2025-12-16T22:15:01Z) - Hybrid Iterative Solvers with Geometry-Aware Neural Preconditioners for Parametric PDEs [5.532017361572708]
We introduce Geo-DeepONet, a geometry-aware deep operator network that incorporates domain information extracted from finite element discretizations. We develop a class of geometry-aware hybrid preconditioned iterative solvers by coupling Geo-DeepONet with traditional methods such as relaxation schemes and Krylov subspace algorithms.
arXiv Detail & Related papers (2025-12-16T17:06:10Z) - High precision PINNs in unbounded domains: application to singularity formulation in PDEs [83.50980325611066]
We study the choices of neural network ansatz, sampling strategy, and optimization algorithm. For the 1D Burgers equation, our framework can lead to a solution with very high precision. For the 2D Boussinesq equation, we obtain a solution whose loss is $4$ digits smaller than that obtained in Wang et al. (2023), with fewer training steps.
arXiv Detail & Related papers (2025-06-24T02:01:44Z) - Physics-Informed Deep B-Spline Networks for Dynamical Systems [1.2999518604217852]
We propose a hybrid framework that uses a neural network to learn B-spline control points to approximate solutions to PDEs with varying system and ICBC parameters.
We provide theoretical guarantees that the proposed B-spline networks serve as universal approximators for the set of solutions of PDEs with varying ICBCs under mild conditions.
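The spline side of such a model can be sketched with the Cox–de Boor recursion; here the control points are fixed numbers standing in for values a network would predict (the network itself and its conditioning on ICBC parameters are omitted).

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    # Cox–de Boor recursion: i-th B-spline basis of order k (degree k-1).
    if k == 1:
        return np.where((knots[i] <= t) & (t < knots[i + 1]), 1.0, 0.0)
    left_den = knots[i + k - 1] - knots[i]
    right_den = knots[i + k] - knots[i + 1]
    left = 0.0 if left_den == 0 else \
        (t - knots[i]) / left_den * bspline_basis(i, k - 1, t, knots)
    right = 0.0 if right_den == 0 else \
        (knots[i + k] - t) / right_den * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Cubic spline (order 4) with a uniform knot vector.
order = 4
knots = np.arange(0.0, 12.0)          # 12 knots -> 8 basis functions
n_basis = len(knots) - order
t = np.linspace(3.0, 8.0, 101)        # interior span where the basis is complete

# Control points: in the paper's setting these would be network outputs.
control = np.array([0.0, 1.0, 2.5, 2.0, 1.0, 0.5, 1.5, 3.0])

B = np.stack([bspline_basis(i, order, t, knots) for i in range(n_basis)])
u = control @ B                        # spline approximation u(t)

# On the interior span the cubic basis forms a partition of unity.
print(np.allclose(B.sum(axis=0), 1.0))  # True
```

The partition-of-unity and local-support properties of the basis are what make the control-point parameterization well behaved as the ICBC parameters vary.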
arXiv Detail & Related papers (2025-03-21T01:15:40Z) - Mechanistic PDE Networks for Discovery of Governing Equations [52.492158106791365]
We present Mechanistic PDE Networks, a model for discovery of partial differential equations from data.
The represented PDEs are then solved and decoded for specific tasks.
We develop a native, GPU-capable, parallel, sparse, and differentiable multigrid solver specialized for linear partial differential equations.
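The multigrid principle behind such a solver can be illustrated, well below the paper's GPU-scale implementation, with a two-grid correction scheme for the 1D Poisson equation:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    # Weighted-Jacobi smoothing for -u'' = f, u(0) = u(1) = 0.
    u = u.copy()
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                    # pre-smoothing
    r = residual(u, f, h)
    n_c = (len(u) + 1) // 2                          # coarse grid: every other node
    idx = np.arange(1, n_c - 1)
    rc = np.zeros(n_c)                               # full-weighting restriction
    rc[idx] = 0.25 * r[2 * idx - 1] + 0.5 * r[2 * idx] + 0.25 * r[2 * idx + 1]
    hc = 2 * h
    A = (np.diag(2 * np.ones(n_c - 2))
         - np.diag(np.ones(n_c - 3), 1)
         - np.diag(np.ones(n_c - 3), -1)) / hc**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])          # exact coarse-grid solve
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongation
    return jacobi(u + e, f, h, sweeps=3)             # post-smoothing

n = 65
x = np.linspace(0, 1, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                     # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))         # down to discretization error
```

Recursing on the coarse solve (instead of solving it directly) gives the usual V-cycle; making each step differentiable is what the paper contributes.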
arXiv Detail & Related papers (2025-02-25T17:21:44Z) - Advancing Generalization in PINNs through Latent-Space Representations [71.86401914779019]
Physics-informed neural networks (PINNs) have made significant strides in modeling dynamical systems governed by partial differential equations (PDEs).
We propose PIDO, a novel physics-informed neural PDE solver designed to generalize effectively across diverse PDE configurations.
We validate PIDO on a range of benchmarks, including 1D combined equations and 2D Navier-Stokes equations.
arXiv Detail & Related papers (2024-11-28T13:16:20Z) - Extremization to Fine Tune Physics Informed Neural Networks for Solving Boundary Value Problems [0.1874930567916036]
Theory of Functional Connections (TFC) is used to exactly impose initial and boundary conditions (IBCs) of (I)BVPs on PINNs.
We propose a modification to the TFC framework named Reduced TFC and show a significant improvement in the training and inference time of PINNs.
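The textbook TFC constrained expression for two-point Dirichlet data (not the paper's Reduced TFC variant) shows how IBCs are imposed exactly for any free function:

```python
import numpy as np

def constrained(x, net, a=1.0, b=-2.0):
    # TFC-style constrained expression for u(0) = a, u(1) = b:
    # the two correction terms cancel whatever the free function
    # does at the boundary, so the BCs hold identically.
    return (net(x)
            + (a - net(np.zeros_like(x))) * (1 - x)
            + (b - net(np.ones_like(x))) * x)

free = lambda x: np.exp(x) * np.cos(5 * x)   # arbitrary free function / network
x = np.array([0.0, 0.5, 1.0])
u = constrained(x, free)
print(u[0], u[-1])  # u(0) = 1 and u(1) = -2 by construction
```

Since the constraints hold exactly, the PINN loss reduces to the interior PDE residual alone, which is why TFC-based fine-tuning such as extremization is possible.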
arXiv Detail & Related papers (2024-06-07T23:25:13Z) - Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver), capable of solving a broad range of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z) - Transolver: A Fast Transformer Solver for PDEs on General Geometries [66.82060415622871]
We present Transolver, which learns intrinsic physical states hidden behind discretized geometries.
By calculating attention over physics-aware tokens encoded from slices, Transolver can effectively capture intricate physical correlations.
Transolver achieves consistent state-of-the-art with 22% relative gain across six standard benchmarks and also excels in large-scale industrial simulations.
arXiv Detail & Related papers (2024-02-04T06:37:38Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
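A deep-equilibrium forward pass in miniature, with a small weight-tied map in place of the FNO architecture, looks like this (the small weight norm is an assumption that makes the map a contraction):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Small-norm weights make the map a contraction, so the fixed point
# exists and plain iteration converges (a DEQ forward pass).
W = 0.1 * rng.standard_normal((d, d))
U = rng.standard_normal((d, d))
b = rng.standard_normal(d)
x = rng.standard_normal(d)            # input (e.g. a PDE forcing term)

def f(z):
    # Weight-tied layer applied repeatedly until equilibrium.
    return np.tanh(W @ z + U @ x + b)

z = np.zeros(d)
for _ in range(100):
    z_next = f(z)
    if np.linalg.norm(z_next - z) < 1e-12:
        break
    z = z_next

print(np.linalg.norm(f(z) - z))       # ~0: z is the equilibrium
```

The appeal for steady-state PDEs is that the solution is itself a fixed point, so solving to equilibrium matches the structure of the problem.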
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Efficient Neural PDE-Solvers using Quantization Aware Training [71.0934372968972]
We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and across three orders of magnitude in FLOPs.
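The core mechanism of quantization-aware training, fake quantization that exposes rounding error to the loss during training, can be sketched as:

```python
import numpy as np

def fake_quantize(w, bits=8):
    # Uniform symmetric quantize-dequantize: the forward pass sees
    # quantized values, so the loss reflects the rounding error.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
w_q, scale = fake_quantize(w)

# Round-off error is bounded by half a quantization step.
print(np.max(np.abs(w - w_q)) <= 0.5 * scale)  # True
```

In full QAT the rounding is bypassed in the backward pass (a straight-through estimator) so gradients still flow to the full-precision weights.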
arXiv Detail & Related papers (2023-08-14T09:21:19Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Guiding continuous operator learning through Physics-based boundary constraints [1.5847814664948012]
Boundary conditions (BCs) are physics-enforced constraints necessary for solutions of partial differential equations (PDEs).
Current neural-network based approaches that aim to solve PDEs rely only on training data to help the model learn BCs implicitly.
We propose Boundary enforcing Operator Network (BOON) that enables the BC satisfaction of neural operators by making structural changes to the operator kernel.
arXiv Detail & Related papers (2022-12-14T19:54:46Z) - JAX-DIPS: Neural bootstrapping of finite discretization methods and application to elliptic problems with discontinuities [0.0]
This strategy can be used to efficiently train neural network surrogate models of partial differential equations.
The presented neural bootstrapping method (hereby dubbed NBM) is based on evaluation of the finite discretization residuals of the PDE system.
We show NBM is competitive in terms of memory and training speed with other PINN-type frameworks.
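The training signal NBM is built on, a finite-discretization residual of the PDE system, can be sketched for -u'' = f (a stand-alone illustration, not the JAX-DIPS code):

```python
import numpy as np

def fd_residual(u, f, h):
    # Finite-difference residual of -u'' = f on interior nodes;
    # minimizing its squared norm w.r.t. network parameters is the
    # bootstrapping training signal (no PDE derivatives via autodiff).
    return (2 * u[1:-1] - u[:-2] - u[2:]) / h**2 - f[1:-1]

n = 41
x = np.linspace(0, 1, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)

u_guess = x * (1 - x)                          # imperfect surrogate output
loss = np.mean(fd_residual(u_guess, f, h) ** 2)
print(loss > 0)                                # True: nonzero training signal

# Solving the discrete system drives the residual to numerical zero.
A = (np.diag(2 * np.ones(n - 2))
     - np.diag(np.ones(n - 3), 1)
     - np.diag(np.ones(n - 3), -1)) / h**2
u_star = np.zeros(n)
u_star[1:-1] = np.linalg.solve(A, f[1:-1])
print(np.max(np.abs(fd_residual(u_star, f, h))))  # near machine precision
```

Because only stencil evaluations are needed, the same loss works for discretizations (e.g. across discontinuities) where strong-form PINN residuals are awkward.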
arXiv Detail & Related papers (2022-10-25T20:13:26Z) - Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
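One concrete symmetry of this kind, a Galilean boost for the inviscid Burgers equation, is a standard example (chosen here for illustration; the paper derives an exhaustive list per PDE):

```python
import numpy as np

def galilean_boost(u, c):
    # Lie point symmetry of the inviscid Burgers equation u_t + u u_x = 0:
    # if u(x, t) is a solution, so is u(x - c t, t) + c. Applying it to
    # stored solutions yields new training samples for free.
    return lambda x, t: u(x - c * t, t) + c

u = lambda x, t: x / (1.0 + t)        # a known exact solution
v = galilean_boost(u, c=0.7)          # augmented "new" solution

# Check the PDE residual of the transformed solution by finite differences.
x0, t0, h = 0.3, 0.5, 1e-5
v_t = (v(x0, t0 + h) - v(x0, t0 - h)) / (2 * h)
v_x = (v(x0 + h, t0) - v(x0 - h, t0)) / (2 * h)
print(abs(v_t + v(x0, t0) * v_x))     # ~0: still a solution
```

Each such transformation multiplies the effective dataset without any new solver runs, which is the source of the order-of-magnitude sample-complexity gain.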
arXiv Detail & Related papers (2022-02-15T18:43:17Z) - Solving and Learning Nonlinear PDEs with Gaussian Processes [11.09729362243947]
We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations.
The proposed approach provides a natural generalization of collocation kernel methods to nonlinear PDEs and inverse problems (IPs).
For IPs, while the traditional approach has been to iterate between the identifications of parameters in the PDE and the numerical approximation of its solution, our algorithm tackles both simultaneously.
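The collocation-kernel building block that the framework generalizes can be sketched as plain RKHS interpolation at collocation points (a linear special case, not the nonlinear PDE machinery):

```python
import numpy as np

def rbf(xa, xb, ell=0.1):
    # Gaussian (RBF) kernel Gram matrix between two point sets.
    return np.exp(-((xa[:, None] - xb[None, :]) ** 2) / (2 * ell**2))

# Minimum-norm RKHS interpolant of observed values at collocation
# points; nonlinear PDE constraints enter as extra equations on the
# same representer weights in the full framework.
x_train = np.linspace(0, 1, 9)
y_train = np.sin(2 * np.pi * x_train)

K = rbf(x_train, x_train)
alpha = np.linalg.solve(K, y_train)   # representer weights

def predict(x):
    return rbf(x, x_train) @ alpha

print(np.max(np.abs(predict(x_train) - y_train)))  # ~0: interpolates the data
```

The GP view adds uncertainty quantification on top of this interpolant, which is what lets the method treat parameters and solution jointly in IPs.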
arXiv Detail & Related papers (2021-03-24T03:16:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.