The Finite Element Neural Network Method: One Dimensional Study
- URL: http://arxiv.org/abs/2501.12508v1
- Date: Tue, 21 Jan 2025 21:39:56 GMT
- Title: The Finite Element Neural Network Method: One Dimensional Study
- Authors: Mohammed Abda, Elsa Piollet, Christopher Blake, Frédérick P. Gosselin
- Abstract summary: This research introduces the finite element neural network method (FENNM) within the framework of the Petrov-Galerkin method.
FENNM uses convolution operations to approximate the weighted residual of the differential equations.
This enables the integration of forcing terms and natural boundary conditions into the loss function, as in conventional finite element method (FEM) solvers.
- Abstract: The potential of neural networks (NNs) in engineering is rooted in their capacity to capture intricate patterns in complex systems, leveraging their universal nonlinear approximation capabilities and high expressivity. Meanwhile, conventional numerical methods, backed by years of meticulous refinement, remain the standard for accuracy and dependability. Bridging these paradigms, this research introduces the finite element neural network method (FENNM) within the framework of the Petrov-Galerkin method, using convolution operations to approximate the weighted residual of the differential equations. The NN generates the global trial solution, while the test functions belong to the Lagrange test function space. FENNM offers several key advantages. Notably, the weak form of the differential equations introduces flux terms that contribute information to the loss function that is unavailable to VPINN, hp-VPINN, and cv-PINN. This enables forcing terms and natural boundary conditions to be integrated into the loss function in the same way as in conventional finite element method (FEM) solvers, which facilitates optimization and extends applicability to more complex problems, easing industrial adoption. This study elaborates on the derivation of FENNM, highlighting its similarities with FEM, and provides guidelines for optimal utilization and cost-efficiency. Finally, the study illustrates the robustness and accuracy of FENNM through multiple numerical case studies and adaptive mesh refinement techniques.
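To make the convolution-based weighted residual concrete, here is a minimal, illustrative sketch (not code from the paper) for $-u'' = f$ on $(0, 1)$ with homogeneous Dirichlet conditions hard-enforced, so no flux terms remain. The mesh size, network architecture, quadrature rule, and optimizer settings are all assumptions chosen for brevity.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n_el, q = 16, 8                      # elements; quadrature points per element
h = 1.0 / n_el                       # element size
N = n_el * q
x = (torch.arange(N, dtype=torch.float32) + 0.5) / N   # midpoint quadrature grid
x.requires_grad_(True)
f = ((torch.pi ** 2) * torch.sin(torch.pi * x)).detach()  # source for -u'' = f

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def u_trial(xs):
    # Hard-enforce the essential BCs u(0) = u(1) = 0.
    return xs * (1.0 - xs) * net(xs.unsqueeze(-1)).squeeze(-1)

# One Lagrange hat test function sampled over its two-element support; every
# interior test function is a shifted copy, so all weighted residuals come
# from a single strided convolution over the quadrature grid.
t = (torch.arange(2 * q, dtype=torch.float32) + 0.5) / q   # local coord in (0, 2)
v = 1.0 - (t - 1.0).abs()                                  # hat function, peak 1
dv = torch.where(t < 1.0, torch.tensor(1.0 / h), torch.tensor(-1.0 / h))
w = 1.0 / N                                                # midpoint-rule weight

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    u = u_trial(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Weak residual against each hat v_i:  r_i = int u' v_i' dx - int f v_i dx
    r = (F.conv1d((w * du).view(1, 1, -1), dv.view(1, 1, -1), stride=q)
         - F.conv1d((w * f).view(1, 1, -1), v.view(1, 1, -1), stride=q))
    loss = (r ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because every interior hat function is a translate of the same stencil, a single stride-q convolution produces the whole residual vector at once, which mirrors the assembly of a FEM residual vector.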
Related papers
- FBSJNN: A Theoretically Interpretable and Efficiently Deep Learning method for Solving Partial Integro-Differential Equations [0.0]
We propose a novel framework for solving a class of Partial Integro-Differential Equations (PIDEs) through a deep learning-based approach.
This method, termed the Forward-Backward Stochastic Jump Neural Network (FBSJNN), is both theoretically interpretable and numerically effective.
Numerical experiments indicate that the FBSJNN scheme obtains numerical solutions with a relative error on the scale of $10^{-3}$.
arXiv Detail & Related papers (2024-12-15T01:37:48Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve an almost five-fold gain in power efficiency at a moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Differentiable Neural-Integrated Meshfree Method for Forward and Inverse Modeling of Finite Strain Hyperelasticity [1.290382979353427]
The present study aims to extend the novel physics-informed machine learning approach, specifically the neural-integrated meshfree (NIM) method, to model finite-strain problems.
Thanks to its inherent differentiable programming capabilities, NIM circumvents the need to derive the Newton-Raphson linearization of the variational form.
NIM is applied to identify heterogeneous mechanical properties of hyperelastic materials from strain data, validating its effectiveness in the inverse modeling of nonlinear materials.
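The remark about avoiding Newton-Raphson linearization can be illustrated in a few lines: with automatic differentiation, a nonlinear variational problem can be solved by minimizing the stored energy directly with a first-order optimizer. The toy 1D bar, energy density, and settings below are hypothetical sketches, not the NIM formulation itself.

```python
import torch

x = torch.linspace(0.0, 1.0, 64, requires_grad=True)
net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))

def disp(xs):
    # Displacement field with u(0) = 0 and u(1) = 0.2 built in.
    return 0.2 * xs + xs * (1.0 - xs) * net(xs.unsqueeze(-1)).squeeze(-1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    u = disp(x)
    lam = 1.0 + torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # stretch
    psi = 0.5 * (lam ** 2 - 1.0) - torch.log(lam)  # toy neo-Hookean-type density
    energy = psi.mean()                            # ~ integral of psi over the bar
    opt.zero_grad(); energy.backward(); opt.step()
```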
arXiv Detail & Related papers (2024-07-15T19:15:18Z)
- Enriched Physics-informed Neural Networks for Dynamic Poisson-Nernst-Planck Systems [0.8192907805418583]
This paper proposes a meshless deep learning algorithm, enriched physics-informed neural networks (EPINNs) to solve dynamic Poisson-Nernst-Planck (PNP) equations.
EPINNs take traditional physics-informed neural networks as the foundational framework and add adaptive loss weights to balance the loss terms.
Numerical results indicate that the new method has better applicability than traditional numerical methods in solving such coupled nonlinear systems.
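The summary does not specify the adaptive weighting scheme; the sketch below shows one common pattern for balancing multi-term PINN losses (learnable log-variance weights), which may differ from what EPINNs actually use.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))      # stand-in PINN
log_s = torch.zeros(2, requires_grad=True)             # one log-variance per loss term
opt = torch.optim.Adam(list(net.parameters()) + [log_s], lr=1e-3)

def weighted_total(loss_pde, loss_bc):
    terms = torch.stack([loss_pde, loss_bc])
    # exp(-log_s) scales each term; the +log_s penalty keeps the learned
    # weights from collapsing to zero.
    return (torch.exp(-log_s) * terms + log_s).sum()
```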
arXiv Detail & Related papers (2024-02-01T02:57:07Z)
- N-Adaptive Ritz Method: A Neural Network Enriched Partition of Unity for Boundary Value Problems [1.2200609701777907]
This work introduces a novel neural network-enriched Partition of Unity (NN-PU) approach for solving boundary value problems via artificial neural networks.
The NN enrichment is constructed by combining pre-trained feature-encoded NN blocks with an untrained NN block.
The proposed method offers accurate solutions while notably reducing computational cost compared to conventional adaptive refinement in mesh-based methods, as sketched below.
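As a rough sketch of the enrichment idea only (frozen pre-trained blocks plus one trainable block, summed into a single approximation; the full NN-PU construction additionally couples this with partition-of-unity shape functions, omitted here):

```python
import torch

def block():
    return torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                               torch.nn.Linear(16, 1))

pre = [block() for _ in range(3)]          # stand-ins for pre-trained feature blocks
for b in pre:
    for p in b.parameters():
        p.requires_grad_(False)            # pre-trained blocks stay frozen
new = block()                              # the untrained, trainable block

def enriched(x):
    # Enriched approximation: frozen features plus a trainable correction.
    return sum(b(x) for b in pre) + new(x)
```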
arXiv Detail & Related papers (2024-01-16T18:11:14Z)
- Neural-Integrated Meshfree (NIM) Method: A differentiable programming-based hybrid solver for computational mechanics [1.7132914341329852]
We present the neural-integrated meshfree (NIM) method, a differentiable programming-based hybrid meshfree approach within the field of computational mechanics.
NIM seamlessly integrates traditional physics-based meshfree discretization techniques with deep learning architectures.
Under the NIM framework, we propose two truly meshfree solvers: the strong form-based NIM (S-NIM) and the local variational form-based NIM (V-NIM).
arXiv Detail & Related papers (2023-11-21T17:57:12Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward signal among individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
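For intuition, the implicit update $\theta_{k+1} = \theta_k - \eta \nabla L(\theta_{k+1})$ is equivalent to a proximal step, which can be approximated by a short inner minimization. A hedged sketch of one outer step follows; the paper's actual algorithm may differ.

```python
import torch

def isgd_step(params, loss_fn, eta=0.1, inner_steps=5, inner_lr=0.05):
    # Approximately solve
    #   theta_new = argmin_theta  loss(theta) + ||theta - theta_old||^2 / (2*eta),
    # whose stationarity condition is the implicit update above.
    params = list(params)                       # tensors with requires_grad=True
    anchor = [p.detach().clone() for p in params]
    inner = torch.optim.SGD(params, lr=inner_lr)
    for _ in range(inner_steps):
        prox = sum(((p - a) ** 2).sum() for p, a in zip(params, anchor)) / (2 * eta)
        obj = loss_fn() + prox
        inner.zero_grad()
        obj.backward()
        inner.step()
```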
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT).
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
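A minimal message-passing layer on a 1D chain graph, in the spirit of such solvers; the feature sizes and MLPs below are illustrative assumptions, not the paper's architecture.

```python
import torch

n, d = 64, 16
h = torch.randn(n, d)                                  # node states on a 1D grid
src = torch.arange(n - 1)
edges = torch.stack([torch.cat([src, src + 1]),        # senders
                     torch.cat([src + 1, src])])       # receivers (both directions)

msg_mlp = torch.nn.Sequential(torch.nn.Linear(2 * d, d), torch.nn.ReLU())
upd_mlp = torch.nn.Sequential(torch.nn.Linear(2 * d, d), torch.nn.ReLU())

def mp_layer(h, edges):
    s, r = edges
    m = msg_mlp(torch.cat([h[s], h[r]], dim=-1))       # message per edge
    agg = torch.zeros_like(h).index_add_(0, r, m)      # sum messages per receiver
    return h + upd_mlp(torch.cat([h, agg], dim=-1))    # residual node update

h = mp_layer(h, edges)
```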
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
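A generic sketch of the adversarial idea: a moment restriction $E[(y - f(x))\, g(z)] = 0$ becomes a min-max game over two networks, with a quadratic penalty keeping the inner maximization finite. The losses, synthetic data, and architectures below are illustrative, not the paper's formulation.

```python
import torch

torch.manual_seed(0)
f = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
g = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

def payoff(x, y, z):
    # E[(y - f(x)) g(z)] - 0.5 E[g(z)^2]; the penalty keeps max over g finite.
    res = y - f(x)
    return (res * g(z)).mean() - 0.5 * (g(z) ** 2).mean()

for _ in range(2000):
    z = torch.randn(256, 1)                    # toy instrument
    x = z + 0.3 * torch.randn(256, 1)          # toy covariate
    y = 2.0 * x + 0.3 * torch.randn(256, 1)    # toy outcome
    opt_g.zero_grad(); (-payoff(x, y, z)).backward(); opt_g.step()  # ascent on g
    opt_f.zero_grad(); payoff(x, y, z).backward(); opt_f.step()     # descent on f
```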
arXiv Detail & Related papers (2020-07-02T17:55:47Z)