N-Adaptive Ritz Method: A Neural Network Enriched Partition of Unity for
Boundary Value Problems
- URL: http://arxiv.org/abs/2401.08544v1
- Date: Tue, 16 Jan 2024 18:11:14 GMT
- Authors: Jonghyuk Baek and Yanran Wang and J. S. Chen
- Abstract summary: This work introduces a novel neural network-enriched Partition of Unity (NN-PU) approach for solving boundary value problems via artificial neural networks.
The NN enrichment is constructed by combining pre-trained feature-encoded NN blocks with an untrained NN block.
The proposed method offers accurate solutions while notably reducing the computational cost compared to the conventional adaptive refinement in the mesh-based methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional finite element methods are known to be tedious in adaptive
refinements due to their conformal regularity requirements. Further, the
enrichment functions for adaptive refinements are often not readily available
in general applications. This work introduces a novel neural network-enriched
Partition of Unity (NN-PU) approach for solving boundary value problems via
artificial neural networks with a potential energy-based loss function
minimization. The flexibility and adaptivity of the NN function space are
utilized to capture complex solution patterns that the conventional Galerkin
methods fail to capture. The NN enrichment is constructed by combining
pre-trained feature-encoded NN blocks with an additional untrained NN block.
The pre-trained NN blocks learn specific local features during the offline
stage, enabling efficient enrichment of the approximation space during the
online stage through the Ritz-type energy minimization. The NN enrichment is
introduced under the Partition of Unity (PU) framework, ensuring convergence of
the proposed method. The proposed NN-PU approximation and feature-encoded
transfer learning form an adaptive approximation framework, termed the
neural-refinement (n-refinement), for solving boundary value problems.
Demonstrated by solving various elasticity problems, the proposed method offers
accurate solutions while notably reducing the computational cost compared to
the conventional adaptive refinement in the mesh-based methods.
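The Ritz-type energy minimization with a partition-of-unity ansatz described in the abstract can be sketched as follows. This is a simplified, hypothetical illustration and not the authors' implementation: the model problem is 1D, the PU functions are linear hat functions, and a single tanh unit stands in for the pre-trained NN enrichment blocks.

```python
import math

# Hypothetical sketch of Ritz-type energy minimization with a partition-of-
# unity (PU) enriched ansatz.  Model problem (not from the paper):
#   -u'' = f on (0, 1),  u(0) = u(1) = 0,
# with f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).

N_NODES = 5                            # interior PU nodes
H = 1.0 / (N_NODES + 1)                # node spacing
NODES = [(i + 1) * H for i in range(N_NODES)]
QUAD = [j / 100 for j in range(101)]   # trapezoid quadrature grid

def hat(i, x):
    """Linear PU hat function centered at NODES[i]."""
    return max(0.0, 1.0 - abs(x - NODES[i]) / H)

def u(params, x):
    """PU sum of nodal coefficients plus one tanh 'enrichment' term."""
    a, (c, w, b) = params[:N_NODES], params[N_NODES:]
    base = sum(a[i] * hat(i, x) for i in range(N_NODES))
    # the x(1-x) factor makes the enrichment vanish on the boundary
    return base + c * x * (1.0 - x) * math.tanh(w * x + b)

def energy(params):
    """Ritz energy  Pi[u] = integral of (1/2) u'^2 - f u  (trapezoid rule)."""
    total, eps = 0.0, 1e-5
    for k, x in enumerate(QUAD):
        x0, x1 = max(x - eps, 0.0), min(x + eps, 1.0)
        du = (u(params, x1) - u(params, x0)) / (x1 - x0)   # central difference
        f = math.pi ** 2 * math.sin(math.pi * x)
        wgt = 0.5 if k in (0, len(QUAD) - 1) else 1.0
        total += wgt * (0.5 * du * du - f * u(params, x)) * 0.01
    return total

def minimize(params, steps=150, lr=0.05, fd=1e-4):
    """Plain finite-difference gradient descent on the Ritz energy."""
    for _ in range(steps):
        e0 = energy(params)
        grad = [(energy(params[:j] + [params[j] + fd] + params[j + 1:]) - e0) / fd
                for j in range(len(params))]
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

params = minimize([0.0] * N_NODES + [0.1, 1.0, 0.0])
err = max(abs(u(params, x) - math.sin(math.pi * x)) for x in QUAD)
print(f"max pointwise error: {err:.4f}")
```

In the paper the enrichment blocks are pre-trained offline on local features and only the combination is optimized online; the sketch above collapses both stages into one gradient descent on the total potential energy.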
Related papers
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- A Neural Network-Based Enrichment of Reproducing Kernel Approximation
for Modeling Brittle Fracture [0.0]
An improved version of the neural network-enhanced Reproducing Kernel Particle Method (NN-RKPM) is proposed for modeling brittle fracture.
The effectiveness of the proposed method is demonstrated by a series of numerical examples involving damage propagation and branching.
arXiv Detail & Related papers (2023-07-04T21:52:09Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural
Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- A Neural Network-enhanced Reproducing Kernel Particle Method for
Modeling Strain Localization [0.0]
In this work, a neural network-enhanced reproducing kernel particle method (NN-RKPM) is proposed.
The location, orientation, and shape of the solution transition near a localization zone are automatically captured by the NN approximation.
The effectiveness of the proposed NN-RKPM is verified by a series of numerical verifications.
arXiv Detail & Related papers (2022-04-28T23:59:38Z)
- Physics and Equality Constrained Artificial Neural Networks: Application
to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solutions of partial differential equations (PDEs).
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
arXiv Detail & Related papers (2021-09-30T05:55:35Z)
- Self-adaptive deep neural network: Numerical approximation to functions
and PDEs [3.6525914200522656]
We introduce a self-adaptive algorithm for designing an optimal deep neural network for a given task.
The adaptive network enhancement (ANE) method is written as loops of the form train, estimate, and enhance.
We demonstrate that the ANE method can automatically design a nearly minimal NN for learning functions exhibiting sharp transitional layers.
arXiv Detail & Related papers (2021-09-07T03:16:57Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- PFNN: A Penalty-Free Neural Network Method for Solving a Class of
Second-Order Boundary-Value Problems on Complex Geometries [4.620110353542715]
We present PFNN, a penalty-free neural network method, to solve a class of second-order boundary-value problems.
PFNN is superior to several existing approaches in terms of accuracy, flexibility and robustness.
arXiv Detail & Related papers (2020-04-14T13:36:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.