Comparison of Neural FEM and Neural Operator Methods for applications in
Solid Mechanics
- URL: http://arxiv.org/abs/2307.02494v1
- Date: Tue, 4 Jul 2023 06:16:43 GMT
- Title: Comparison of Neural FEM and Neural Operator Methods for applications in
Solid Mechanics
- Authors: Stefan Hildebrand, Sandra Klinge
- Abstract summary: The current work investigates two classes, Neural FEM and Neural Operator Methods, for use in elastostatics by means of numerical experiments.
The main differences between the two classes are computational effort and accuracy.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine Learning methods are among the most up-to-date approaches
for solving partial differential equations. The current work investigates two
classes, Neural FEM and Neural Operator Methods, for use in elastostatics by
means of numerical experiments. The Neural Operator methods require expensive
training but then allow multiple boundary value problems to be solved with the
same Machine Learning model. The main differences between the two classes are
computational effort and accuracy. The accuracy in particular requires more
research for practical applications.
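
To make the distinction concrete, here is a minimal sketch (our illustration, not the paper's code) of the Neural FEM workflow on a 1D elastostatic bar governed by -EA u''(x) = f(x) with u(0) = u(1) = 0. The network size, the load f, and the optimizer settings are illustrative assumptions.

```python
# Neural FEM / PINN-style sketch: one network is trained per boundary value problem.
import torch

EA = 1.0
f = lambda x: torch.ones_like(x)  # assumed constant distributed load

u_net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.rand(64, 1, requires_grad=True)   # collocation points
    u = x * (1 - x) * u_net(x)                  # hard-enforces u(0) = u(1) = 0
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    loss = ((-EA * d2u - f(x)) ** 2).mean()     # strong-form residual
    opt.zero_grad(); loss.backward(); opt.step()

# A Neural Operator would instead be trained once on many (f, u) pairs and
# then evaluate new loads f in a single forward pass, trading a higher
# one-time training cost for cheap repeated solves.
```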
Related papers
- Principled Approaches for Extending Neural Architectures to Function Spaces for Operator Learning [78.88684753303794]
Deep learning has predominantly advanced through applications in computer vision and natural language processing.
Neural operators are a principled way to generalize neural networks to mappings between function spaces.
This paper identifies and distills the key principles for constructing practical implementations of mappings between infinite-dimensional function spaces.
arXiv Detail & Related papers (2025-06-12T17:59:31Z)
- Multi-Level Monte Carlo Training of Neural Operators [16.643463851493618]
Operator learning aims to approximate nonlinear operators related to partial differential equations (PDEs) using neural operators.
These rely on discretization of input and output functions and are usually expensive to train for large-scale problems at high resolution.
Motivated by this, we present a Multi-Level Monte Carlo (MLMC) approach to train neural operators by leveraging a hierarchy of resolutions of function discretization.
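
A minimal sketch of the multi-level idea as we read this summary (not the paper's implementation): the training loss is a cheap coarse-resolution estimate plus correction terms between successive resolutions, with fewer samples at the expensive finer levels. The helper loss_at_res, the data format, the resolutions, and the per-level sample counts are all assumed placeholders.

```python
import torch
import torch.nn.functional as F

def loss_at_res(model, batch, res):
    # Assumed helper: evaluate the loss with inputs/targets of shape
    # (batch, channels, length) resampled to `res` grid points; `model` is
    # any resolution-independent neural operator.
    x, y = batch
    xr = F.interpolate(x, size=res, mode="linear", align_corners=False)
    yr = F.interpolate(y, size=res, mode="linear", align_corners=False)
    return ((model(xr) - yr) ** 2).mean()

def mlmc_loss(model, sample_batch, levels=(16, 32, 64), n_per_level=(64, 16, 4)):
    # Coarsest level: cheap, so many samples are used.
    total = loss_at_res(model, sample_batch(n_per_level[0]), levels[0])
    # Finer levels contribute only corrections, estimated from few samples.
    for l in range(1, len(levels)):
        batch = sample_batch(n_per_level[l])
        total = total + loss_at_res(model, batch, levels[l]) \
                      - loss_at_res(model, batch, levels[l - 1])
    return total
```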
arXiv Detail & Related papers (2025-05-19T10:26:28Z)
- Towards Gaussian Process for operator learning: an uncertainty aware resolution independent operator learning algorithm for computational mechanics [8.528817025440746]
This paper introduces a novel Gaussian Process (GP) based neural operator for solving parametric differential equations.
We propose a "neural operator-embedded kernel" wherein the GP kernel is formulated in the latent space learned using a neural operator.
Our results highlight the efficacy of this framework in solving complex PDEs while maintaining robustness in uncertainty estimation.
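
A minimal sketch of how we read the embedded kernel: a standard RBF kernel evaluated on latent features from an encoder standing in for the learned neural operator. The placeholder encoder phi, its sizes, and the discretized input format are our assumptions.

```python
import torch

# Placeholder encoder standing in for the neural operator; inputs are
# functions discretized on 64 grid points (assumed format).
phi = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.GELU(), torch.nn.Linear(32, 16))

def kernel(X1, X2, lengthscale=1.0):
    # k(f, g) = exp(-||phi(f) - phi(g)||^2 / (2 * l^2)) in the latent space.
    d2 = torch.cdist(phi(X1), phi(X2)) ** 2
    return torch.exp(-d2 / (2 * lengthscale ** 2))

def gp_posterior_mean(X, y, Xs, noise=1e-4):
    # Standard GP regression mean: K(Xs, X) (K(X, X) + noise I)^{-1} y.
    K = kernel(X, X) + noise * torch.eye(len(X))
    return kernel(Xs, X) @ torch.linalg.solve(K, y)
```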
arXiv Detail & Related papers (2024-09-17T08:12:38Z)
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
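
A minimal sketch of the linearization idea as we read it (not the paper's method): linearizing a trained network in its weights turns a Gaussian weight posterior into a Gaussian process over outputs, with cross-covariance J(x) S J(x')^T. The tiny network and the isotropic weight covariance S are assumed placeholders.

```python
import torch
from torch.func import functional_call, jacrev

net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
params = dict(net.named_parameters())

def f(p, x):
    return functional_call(net, p, (x,)).squeeze(-1)

def jacobian_flat(x):
    # Jacobian of outputs w.r.t. all weights, flattened to (batch, n_params).
    J = jacrev(f)(params, x)
    return torch.cat([j.reshape(x.shape[0], -1) for j in J.values()], dim=1)

x1, x2 = torch.randn(5, 1), torch.randn(3, 1)
S = 0.01                                            # assumed weight covariance scale
cov = S * jacobian_flat(x1) @ jacobian_flat(x2).T   # GP cross-covariance (5, 3)
```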
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
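
A minimal numerical illustration of the scaling claim, with our own choice of stencil and test function: a fixed convolution kernel whose weights are scaled by 1/h behaves as a first-derivative operator, and the error shrinks as the grid is refined.

```python
import torch
import torch.nn.functional as F

def conv_derivative(u, h):
    # Central-difference stencil [-1, 0, 1] / (2h) applied as a 1D convolution.
    w = torch.tensor([[[-1.0, 0.0, 1.0]]]) / (2 * h)
    return F.conv1d(u.view(1, 1, -1), w).flatten()

for n in (64, 256, 1024):                    # refine the grid
    h = 2 * torch.pi / n
    x = torch.arange(n) * h
    du = conv_derivative(torch.sin(x), h)    # should approach cos(x)
    err = (du - torch.cos(x[1:-1])).abs().max()
    print(n, float(err))                     # error decays like h^2
```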
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- What to Do When Your Discrete Optimization Is the Size of a Neural Network? [24.546550334179486]
Machine learning applications using neural networks involve solving discrete optimization problems.
Classical approaches used in discrete settings do not scale well to large neural networks.
We take continuation path (CP) methods to represent purely continuous approaches and Monte Carlo (MC) methods to represent purely discrete ones.
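
A minimal sketch of a continuation-path style relaxation as we read this summary (not the paper's algorithm): binary decision variables are relaxed through a sigmoid whose temperature is annealed toward zero, tracing a path from a smooth surrogate to the discrete problem. The toy objective and annealing schedule are assumptions.

```python
import torch

theta = torch.zeros(10, requires_grad=True)     # logits for 10 binary variables
opt = torch.optim.Adam([theta], lr=0.1)
obj = lambda b: ((b - torch.linspace(0, 1, 10)) ** 2).sum()  # toy objective

for step in range(500):
    tau = 1.0 / (1.0 + 0.05 * step)             # temperature annealed toward 0
    b = torch.sigmoid(theta / tau)              # relaxed binary variables
    loss = obj(b)
    opt.zero_grad(); loss.backward(); opt.step()

print((theta > 0).int())                        # hard 0/1 decisions at the end
```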
arXiv Detail & Related papers (2024-02-15T21:57:43Z)
- PICL: Physics Informed Contrastive Learning for Partial Differential Equations [7.136205674624813]
We develop a novel contrastive pretraining framework that improves neural operator generalization across multiple governing equations simultaneously.
A combination of physics-informed system evolution and latent-space model output is anchored to input data and used in our distance function.
We find that physics-informed contrastive pretraining improves accuracy for the Fourier Neural Operator in fixed-future and autoregressive rollout tasks for the 1D and 2D Heat, Burgers', and linear advection equations.
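
A minimal sketch of an anchored contrastive objective in the spirit of this summary (not PICL's exact loss): anchor embeddings are paired with positives from a physics-informed evolution of the same state, and the other batch elements act as negatives. The encoder producing the embeddings is assumed.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor_z, positive_z, temperature=0.1):
    a = F.normalize(anchor_z, dim=1)            # anchor embeddings (B, D)
    p = F.normalize(positive_z, dim=1)          # physics-evolved positives (B, D)
    logits = a @ p.T / temperature              # (B, B) similarity matrix
    labels = torch.arange(len(a))               # matching pairs on the diagonal
    return F.cross_entropy(logits, labels)
```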
arXiv Detail & Related papers (2024-01-29T17:32:22Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
An AI framework known as Neural Operators provides a principled approach for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- Hyena Neural Operator for Partial Differential Equations [9.438207505148947]
Recent advances in deep learning have provided a new approach to solving partial differential equations that involves the use of neural operators.
This study utilizes a neural operator called Hyena, which employs a long convolutional filter that is parameterized by a multilayer perceptron.
Our findings indicate that Hyena can serve as an efficient and accurate model for learning the solution operator of partial differential equations.
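
A minimal sketch of the long-convolution idea named in this summary (not the Hyena architecture itself): an MLP maps positions to filter values, giving a filter as long as the input, and the convolution is applied via the FFT. Layer sizes are our assumptions.

```python
import torch

class ImplicitLongConv(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # MLP that maps a position t in [0, 1] to a filter value.
        self.filter_mlp = torch.nn.Sequential(
            torch.nn.Linear(1, hidden), torch.nn.GELU(), torch.nn.Linear(hidden, 1))

    def forward(self, u):                       # u: (batch, length)
        L = u.shape[-1]
        t = torch.linspace(0, 1, L).unsqueeze(-1)
        k = self.filter_mlp(t).squeeze(-1)      # filter as long as the input
        # Circular convolution via FFT: O(L log L) instead of O(L^2).
        return torch.fft.irfft(torch.fft.rfft(u) * torch.fft.rfft(k), n=L)
```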
arXiv Detail & Related papers (2023-06-28T19:45:45Z)
- Neural Operator: Is data all you need to model the world? An insight into the impact of Physics Informed Machine Learning [13.050410285352605]
We provide an insight into how data-driven approaches can complement conventional techniques to solve engineering and physics problems.
We highlight a novel and fast machine learning-based approach to learning the solution operator of a PDE.
arXiv Detail & Related papers (2023-01-30T23:29:33Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
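
A minimal sketch of a kernel integral layer, the basic building block such operators stack (our simplified 1D version): (Ku)(x) = ∫ k(x, y) u(y) dy is approximated by a quadrature sum over grid points, with the kernel k given by a small network. The sizes and the uniform grid are assumptions.

```python
import torch

class KernelIntegralLayer(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Learned kernel k(x, y) represented by a small MLP.
        self.k = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.GELU(), torch.nn.Linear(hidden, 1))

    def forward(self, u, grid):                 # u: (n,), grid: (n,) points in [0, 1]
        n = len(grid)
        x = grid.unsqueeze(1).expand(n, n)
        y = grid.unsqueeze(0).expand(n, n)
        K = self.k(torch.stack([x, y], dim=-1)).squeeze(-1)  # (n, n) kernel matrix
        return (K @ u) / n                      # quadrature approximation of the integral
```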
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
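
A minimal 1D sketch of the spectral layer this summary describes (channel counts and mode cutoff are our assumptions): transform to Fourier space, multiply a truncated set of low frequencies by learned complex weights, and transform back.

```python
import torch

class SpectralConv1d(torch.nn.Module):
    def __init__(self, channels=8, modes=16):
        super().__init__()
        self.modes = modes
        self.weight = torch.nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, u):                       # u: (batch, channels, length)
        U = torch.fft.rfft(u)                   # (batch, channels, length//2 + 1)
        out = torch.zeros_like(U)
        # Learned complex multiplication on the lowest `modes` frequencies only
        # (assumes length//2 + 1 >= modes).
        out[..., :self.modes] = torch.einsum(
            "bim,iom->bom", U[..., :self.modes], self.weight)
        return torch.fft.irfft(out, n=u.shape[-1])
```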
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, known as physics-informed neural networks (PINNs).
We discuss the accuracy of the proposed GatedPINN architecture with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)