Finite Element Method-enhanced Neural Network for Forward and Inverse
Problems
- URL: http://arxiv.org/abs/2205.08321v1
- Date: Tue, 17 May 2022 13:18:14 GMT
- Title: Finite Element Method-enhanced Neural Network for Forward and Inverse
Problems
- Authors: Rishith Ellath Meethal, Birgit Obst, Mohamed Khalil, Aditya
Ghantasala, Anoop Kodakkal, Kai-Uwe Bletzinger, Roland Wüchner
- Abstract summary: We introduce a novel hybrid methodology combining classical finite element methods (FEM) with neural networks.
The residual from finite element methods and custom loss functions from neural networks are merged to form the algorithm.
The proposed methodology can be used for surrogate models in real-time simulation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel hybrid methodology combining classical finite element
methods (FEM) with neural networks to create a well-performing and
generalizable surrogate model for forward and inverse problems. The residual
from finite element methods and custom loss functions from neural networks are
merged to form the algorithm. The Finite Element Method-enhanced Neural Network
hybrid model (FEM-NN hybrid) is data-efficient and physics conforming. The
proposed methodology can be used for surrogate models in real-time simulation,
uncertainty quantification, and optimization in the case of forward problems.
It can be used for updating the models in the case of inverse problems. The
method is demonstrated with examples, and the accuracy of the results and
performance is compared against the conventional way of network training and
the classical finite element method. An application of the forward-solving
algorithm is demonstrated for the uncertainty quantification of wind effects on
a high-rise building. The inverse algorithm is demonstrated in the
speed-dependent bearing coefficient identification of fluid bearings. The
hybrid methodology of this kind can mark a paradigm shift in current
simulation methods.
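The core idea of merging the FEM residual into the network loss can be sketched in a few lines. The following is an illustrative toy, not the authors' implementation: it assembles the stiffness system K u = f for a 1D Poisson problem with linear elements and minimizes the discrete residual ||K u - f||^2 by gradient descent, with a plain vector of trainable nodal values standing in for the network output.

```python
# Hypothetical sketch of a FEM-residual loss (not the paper's code):
# -u'' = 1 on [0,1], u(0) = u(1) = 0, linear elements on a uniform mesh.

def assemble_1d_poisson(n, f=1.0):
    """Stiffness matrix and load vector for the interior nodes."""
    h = 1.0 / n
    m = n - 1                          # number of interior nodes
    K = [[0.0] * m for _ in range(m)]
    for i in range(m):
        K[i][i] = 2.0 / h
        if i > 0:
            K[i][i - 1] = -1.0 / h
        if i < m - 1:
            K[i][i + 1] = -1.0 / h
    fh = [f * h] * m                   # load vector (exact for constant f)
    return K, fh

def matvec(K, u):
    return [sum(kij * uj for kij, uj in zip(row, u)) for row in K]

def residual_loss(K, u, fh):
    """FEM residual r = K u - f and the loss ||r||^2 used for training."""
    r = [ku - fi for ku, fi in zip(matvec(K, u), fh)]
    return sum(ri * ri for ri in r), r

n = 4
K, fh = assemble_1d_poisson(n)
u = [0.0] * (n - 1)                    # stand-in for the network output
lr = 2e-3
for _ in range(5000):
    loss, r = residual_loss(K, u, fh)
    g = matvec(K, r)                   # grad of ||K u - f||^2 is 2 K^T r
    u = [ui - lr * 2.0 * gi for ui, gi in zip(u, g)]

loss, _ = residual_loss(K, u, fh)      # u now approximates the FEM solution
```

Because the loss vanishes exactly at the FEM solution, the trained output is physics conforming by construction; for this load the nodal values match the exact solution u(x) = x(1 - x)/2, e.g. u(0.5) = 0.125.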
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network(PINN) method.
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
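The fixed-point view mentioned above can be sketched generically. This is a toy contraction, not the paper's learned regularizer: a Deep Equilibrium layer iterates z = f(z, x) to convergence instead of stacking explicit layers.

```python
# Hedged sketch of a fixed-point (Deep Equilibrium style) solve.
import math

def f(z, x, w=0.5):
    # Toy "layer": |df/dz| <= w < 1, so the Banach fixed-point theorem
    # guarantees a unique equilibrium z* = f(z*, x).
    return w * math.tanh(z) + x

def deq_solve(x, tol=1e-10, max_iter=1000):
    """Find the equilibrium of f(., x) by naive fixed-point iteration."""
    z = 0.0
    for _ in range(max_iter):
        z_new = f(z, x)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

z_star = deq_solve(0.3)  # satisfies z_star ~= f(z_star, 0.3)
```

In a real DEQ the iteration is solved with an accelerated root finder and differentiated implicitly, but the defining property is the same: the output is a fixed point of one learned transformation.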
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
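The POD compression step described above can be illustrated in pure Python. This is a hypothetical sketch, not the paper's code: the dominant spatial mode is extracted by power iteration on the snapshot correlation matrix, and its temporal coefficients are obtained by projection.

```python
# Hedged sketch of POD: snapshots -> one dominant mode + temporal coefficients.
import math, random

random.seed(0)
n_dof, n_snap = 50, 20
# Synthetic snapshots: one spatial mode modulated in time, plus small noise.
mode = [math.sin(math.pi * (i + 1) / (n_dof + 1)) for i in range(n_dof)]
snaps = [[math.cos(0.3 * t) * mode[i] + 1e-3 * random.random()
          for i in range(n_dof)] for t in range(n_snap)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Power iteration on the spatial correlation C = sum_t s_t s_t^T,
# applied matrix-free so C is never formed explicitly.
v = [random.random() for _ in range(n_dof)]
for _ in range(100):
    cv = [0.0] * n_dof
    for s in snaps:
        c = dot(s, v)
        for i in range(n_dof):
            cv[i] += c * s[i]
    norm = math.sqrt(dot(cv, cv))
    v = [x / norm for x in cv]         # dominant POD mode (unit norm)

# Temporal coefficients: projection of each snapshot onto the mode.
coeffs = [dot(s, v) for s in snaps]
```

The rank-1 reconstruction coeffs[t] * v[i] then replaces the full snapshot set, which is exactly the reduction from many degrees of freedom to a few modes with temporal coefficients that the blurb describes; the deep learning part of such a ROM predicts the coefficients' evolution.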
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Hybrid FEM-NN models: Combining artificial neural networks with the finite element method [0.0]
We present a methodology combining neural networks with physical principle constraints in the form of partial differential equations (PDEs)
The approach allows neural networks to be trained while respecting the PDEs as a strong constraint in the optimisation, as opposed to making them part of the loss function.
We demonstrate the method on a complex cardiac cell model problem using deep neural networks.
arXiv Detail & Related papers (2021-01-04T13:36:06Z)
- Deep-Learning based Inverse Modeling Approaches: A Subsurface Flow Example [0.0]
Theory-guided Neural Network (TgNN) is constructed as a deep-learning surrogate for problems with uncertain model parameters.
Direct-deep-learning-inversion methods, in which the TgNN is constrained with geostatistical information, are proposed for direct inverse modeling.
arXiv Detail & Related papers (2020-07-28T15:31:07Z)
- Hyperspectral Unmixing Network Inspired by Unfolding an Optimization Problem [2.4016406737205753]
The hyperspectral image (HSI) unmixing task is essentially an inverse problem, which is commonly solved by optimization algorithms.
We propose two novel network architectures, named U-ADMM-AENet and U-ADMM-BUNet, for abundance estimation and blind unmixing.
We show that the unfolded structures can find corresponding interpretations in machine learning literature, which further demonstrates the effectiveness of proposed methods.
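The unfolding idea above admits a short generic sketch. This is an illustrative ISTA-style unrolling, not the paper's U-ADMM networks: a fixed number of proximal-gradient iterations becomes the "layers" of a network, where in a learned unfolding the step size and threshold would be trained per layer.

```python
# Hedged sketch of algorithm unrolling for sparse recovery:
# min_x 0.5 ||A x - y||^2 + lam ||x||_1 via a fixed number of ISTA steps.

def soft_threshold(v, t):
    """Proximal operator of the l1 norm, applied elementwise."""
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def unrolled_ista(y, A, n_layers=10, step=0.1, thresh=0.05):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_layers):          # each iteration = one network "layer"
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        r = [a - b for a, b in zip(Ax, y)]
        # gradient of the data term: A^T (A x - y)
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)], thresh)
    return x

# Tiny usage example: identity forward operator, one strong and one weak entry.
A = [[1.0, 0.0], [0.0, 1.0]]
x = unrolled_ista([1.0, -0.2], A)      # weak entry is thresholded to zero
```

Interpreting each unrolled step as a layer is what gives these networks their optimization-based interpretability: the learned parameters correspond to step sizes and thresholds of a known algorithm.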
arXiv Detail & Related papers (2020-05-21T18:49:45Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based learning combined with nonconvexity renders training susceptible to a range of problems.
We propose fusing neighboring layers of deeper networks that are initialized with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.