Applications of Scientific Machine Learning for the Analysis of Functionally Graded Porous Beams
- URL: http://arxiv.org/abs/2408.02698v2
- Date: Tue, 24 Dec 2024 10:45:47 GMT
- Title: Applications of Scientific Machine Learning for the Analysis of Functionally Graded Porous Beams
- Authors: Mohammad Sadegh Eshaghi, Mostafa Bamdad, Cosmin Anitescu, Yizheng Wang, Xiaoying Zhuang, Timon Rabczuk
- Abstract summary: This study investigates different Scientific Machine Learning (SciML) approaches for the analysis of functionally graded (FG) porous beams.
The methods consider the output of a neural network/operator as an approximation to the displacement fields.
A neural operator has been trained to predict the response of the porous beam with functionally graded material.
- Abstract: This study investigates different Scientific Machine Learning (SciML) approaches for the analysis of functionally graded (FG) porous beams and compares them under a new framework. The beam material properties are assumed to vary as an arbitrary continuous function. The methods treat the output of a neural network/operator as an approximation to the displacement fields and derive the equations governing beam behavior from the continuum formulation. The methods are implemented within the framework and formulated via three approaches: (a) the vector approach leads to a Physics-Informed Neural Network (PINN), (b) the energy approach yields the Deep Energy Method (DEM), and (c) the data-driven approach results in a class of Neural Operator methods. Finally, a neural operator is trained to predict the response of the FG porous beam under any porosity distribution pattern and any arbitrary traction condition. The results are validated against analytical and numerical reference solutions. The data and code accompanying this manuscript will be publicly available at https://github.com/eshaghi-ms/DeepNetBeam.
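To make the first two formulations concrete, below is a minimal, hedged sketch (not the paper's implementation) contrasting the vector (PINN) and energy (DEM) losses for a simply supported Euler-Bernoulli beam governed by EI w''''(x) = q(x). The load, network architecture, and optimizer settings are illustrative assumptions, and the data-driven neural operator branch is omitted.

```python
import torch

# Illustrative sketch, not the paper's code: a simply supported
# Euler-Bernoulli beam EI * w''''(x) = q(x) on [0, L], with the
# displacement field w(x) approximated by a small MLP.
torch.manual_seed(0)

EI, L = 1.0, 1.0                        # assumed flexural rigidity and length
q = lambda x: torch.sin(torch.pi * x)   # assumed transverse load

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def w(x):
    # Hard-enforce the displacement BCs w(0) = w(L) = 0; the moment BCs
    # of a simply supported beam are omitted for brevity (DEM recovers
    # them as natural BCs, a strong-form PINN would add a penalty term).
    return x * (L - x) * net(x)

def deriv(x, order):
    x = x.requires_grad_(True)
    y = w(x)
    for _ in range(order):
        y = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    return y

x = torch.linspace(0.0, L, 101).reshape(-1, 1)

def pinn_loss():
    # (a) Vector approach: penalize the strong-form residual EI*w'''' - q.
    return ((EI * deriv(x, 4) - q(x)) ** 2).mean()

def dem_loss():
    # (b) Energy approach: minimize the potential energy
    # Pi = 0.5*EI*int (w'')^2 dx - int q*w dx (trapezoidal quadrature).
    w_xx = deriv(x, 2).squeeze()
    strain = 0.5 * EI * torch.trapezoid(w_xx ** 2, x.squeeze())
    work = torch.trapezoid((q(x) * w(x)).squeeze(), x.squeeze())
    return strain - work

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = pinn_loss()        # swap in dem_loss() for the energy approach
    loss.backward()
    opt.step()
```

Both losses target the same solution: the PINN penalizes the pointwise strong-form residual, which requires fourth derivatives, while the DEM minimizes the potential energy and only needs second derivatives, one practical reason energy methods are attractive.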
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on the PDE datasets tested.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
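The ProdLayer is only named in this entry; as a loosely hedged illustration of the general idea of injecting product (quadratic) feature interactions into an operator-learning layer, a hypothetical layer might look like the following. The class name and structure are assumptions, not the DimOL definition.

```python
import torch

class ProductFeatureLayer(torch.nn.Module):
    """Hypothetical sketch of a product-term layer: alongside the usual
    linear path, it adds the element-wise product of two linear
    projections, letting the layer express quadratic interactions.
    This is an assumed illustration, not the DimOL ProdLayer."""

    def __init__(self, channels: int):
        super().__init__()
        self.linear = torch.nn.Linear(channels, channels)
        self.proj_a = torch.nn.Linear(channels, channels)
        self.proj_b = torch.nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., channels); the second path adds pairwise feature products.
        return self.linear(x) + self.proj_a(x) * self.proj_b(x)

layer = ProductFeatureLayer(channels=64)
u = torch.randn(8, 128, 64)      # e.g., (batch, grid points, channels)
print(layer(u).shape)            # torch.Size([8, 128, 64])
```

Quadratic terms of this kind appear directly in many PDE nonlinearities (e.g., advection u * du/dx), which is why a product path can help a solver represent them without extra network depth.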
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes
We introduce LUNO, a novel framework for approximate Bayesian uncertainty quantification in trained neural operators.
Our approach leverages model linearization to push (Gaussian) weight-space uncertainty forward to the neural operator's predictions.
We show that this can be interpreted as a probabilistic version of the concept of currying from functional programming, yielding a function-valued (Gaussian) random process belief.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
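The linearization step described above can be sketched generically with forward-mode autodiff: push a Gaussian weight perturbation through the Jacobian of the trained network. The model, the isotropic weight covariance, and the sampling scheme below are illustrative assumptions, not LUNO's actual algorithm.

```python
import torch
from torch.func import functional_call, jvp

# Hedged sketch of linearized uncertainty propagation: given a trained
# model f(theta, x) and a Gaussian weight belief theta ~ N(theta*, s^2 I),
# the push-forward predictive is approximately N(f(theta*, x), J S J^T)
# with J = df/dtheta. Here the predictive spread is estimated by sampling
# weight perturbations and mapping each through a Jacobian-vector product.

model = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
params = dict(model.named_parameters())
x = torch.randn(100, 2)   # query points (assumed input format)
sigma = 0.1               # assumed isotropic weight standard deviation

def f(p):
    return functional_call(model, p, (x,))

samples = []
for _ in range(64):
    # Linearize: f(theta* + dtheta) ~= f(theta*) + J dtheta,
    # with J dtheta computed exactly by forward-mode jvp.
    dtheta = {k: sigma * torch.randn_like(v) for k, v in params.items()}
    _, tangent = jvp(f, (params,), (dtheta,))
    samples.append(tangent)

pred_mean = f(params)                          # predictive mean
pred_std = torch.stack(samples).std(dim=0)     # approximate predictive std
```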
- Physics and geometry informed neural operator network with application to acoustic scattering
We propose a physics-informed deep operator network (DeepONet) capable of predicting the scattered pressure field for arbitrarily shaped scatterers.
Our trained model learns a solution operator that can approximate the physically consistent scattered pressure field in just a few seconds.
arXiv Detail & Related papers (2024-06-02T03:41:52Z)
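For readers unfamiliar with the DeepONet architecture this entry builds on, here is a minimal generic sketch (not the paper's physics- and geometry-informed variant): a branch net encodes the input function sampled at fixed sensors, a trunk net encodes query coordinates, and the prediction is their inner product.

```python
import torch

class DeepONet(torch.nn.Module):
    """Minimal generic DeepONet sketch: a branch net encodes the input
    function sampled at m fixed sensors, a trunk net encodes a query
    coordinate, and the output is their inner product. Sizes are
    illustrative assumptions."""

    def __init__(self, m_sensors=100, width=64, p=32, coord_dim=2):
        super().__init__()
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(m_sensors, width), torch.nn.Tanh(),
            torch.nn.Linear(width, p),
        )
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(coord_dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, p),
        )

    def forward(self, u_sensors, y):
        # u_sensors: (batch, m_sensors) input-function samples
        # y: (batch, n_points, coord_dim) query coordinates
        b = self.branch(u_sensors)              # (batch, p)
        t = self.trunk(y)                       # (batch, n_points, p)
        return torch.einsum("bp,bnp->bn", b, t)

model = DeepONet()
u = torch.randn(4, 100)    # e.g., a boundary condition sampled at sensors
y = torch.rand(4, 500, 2)  # field evaluation points
print(model(u, y).shape)   # torch.Size([4, 500])
```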
- Capturing dynamical correlations using implicit neural representations
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
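The "train once, then invert by autodiff" workflow described above can be sketched generically as follows; the surrogate network, parameter dimension, and data are placeholders, not the paper's setup.

```python
import torch

# Hedged generic sketch: a surrogate network, assumed already trained to
# mimic a simulator S(theta) -> signal, is reused with automatic
# differentiation to recover unknown model parameters from measured data.

surrogate = torch.nn.Sequential(              # placeholder architecture
    torch.nn.Linear(3, 64), torch.nn.Tanh(),  # maps theta -> predicted signal
    torch.nn.Linear(64, 256),
)
for p in surrogate.parameters():
    p.requires_grad_(False)                   # freeze the trained surrogate

measured = torch.randn(256)                   # placeholder experimental data
theta = torch.zeros(3, requires_grad=True)    # unknown model parameters

opt = torch.optim.Adam([theta], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = torch.mean((surrogate(theta) - measured) ** 2)
    loss.backward()                           # gradients flow to theta only
    opt.step()

print("recovered parameters:", theta.detach())
```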
- Score-based Diffusion Models in Function Space
Diffusion models have recently emerged as a powerful framework for generative modeling.
This work introduces a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Stochastic analysis of heterogeneous porous material with modified neural architecture search (NAS) based physics-informed neural networks using transfer learning
A modified neural architecture search (NAS) based physics-informed deep learning model is presented.
A three-dimensional flow model is built to provide a benchmark for simulating groundwater flow in highly heterogeneous aquifers.
arXiv Detail & Related papers (2020-10-03T19:57:54Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
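The min-max formulation can be sketched with simultaneous gradient descent-ascent; the regularized objective, networks, and synthetic data below are illustrative assumptions rather than the paper's exact estimator.

```python
import torch

# Hedged sketch of adversarial estimation for a conditional-moment
# restriction E[y - f(x) | z] = 0, the flavor of min-max game described
# above (the paper's exact objective may differ). The critic u(z) plays
# the maximizing test function; f is trained by descent, u by ascent.

f = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
u = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_u = torch.optim.Adam(u.parameters(), lr=1e-3)

# Placeholder data: instrument z, endogenous x, outcome y.
z = torch.randn(512, 1)
x = z + 0.1 * torch.randn(512, 1)
y = torch.sin(x) + 0.1 * torch.randn(512, 1)

for step in range(1000):
    residual = y - f(x)
    # Regularized adversarial objective: max_u E[u(z) r] - 0.5 E[u(z)^2];
    # its inner maximum equals 0.5 E[E[r|z]^2], which f then minimizes.
    game = (u(z) * residual).mean() - 0.5 * (u(z) ** 2).mean()

    opt_u.zero_grad(); opt_f.zero_grad()
    game.backward()
    # Ascent for the critic (flip its gradients), descent for the estimator.
    for p in u.parameters():
        p.grad = -p.grad
    opt_u.step()
    opt_f.step()
```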
- Deep Learning with Functional Inputs
We present a methodology for integrating functional data into feed-forward neural networks.
A by-product of the method is a set of dynamic functional weights that can be visualized during the optimization process.
The model is shown to perform well in a number of contexts including prediction of new data and recovery of the true underlying functional weights.
arXiv Detail & Related papers (2020-06-17T01:23:00Z)
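The functional-weight idea can be sketched as a layer whose weights are functions expanded in a fixed basis and paired with the input curve by numerical integration; the basis choice and names below are assumptions, not the paper's implementation.

```python
import torch

class FunctionalInputLayer(torch.nn.Module):
    """Sketch of a functional-input neuron (an assumed illustration):
    each functional weight beta_j(t) is a learnable combination of fixed
    basis functions, and the neuron responds to int beta_j(t) x(t) dt,
    approximated on the observation grid. The learned beta_j(t) can be
    plotted during optimization, mirroring the 'dynamic functional
    weights' mentioned above."""

    def __init__(self, t, n_basis=5, n_out=16):
        super().__init__()
        # Fourier-style basis evaluated on the grid t: (n_basis, len(t)).
        k = torch.arange(1, n_basis + 1).unsqueeze(1)
        self.register_buffer("basis", torch.sin(k * torch.pi * t.unsqueeze(0)))
        self.register_buffer("t", t)
        self.coefs = torch.nn.Parameter(torch.randn(n_out, n_basis) * 0.1)

    def forward(self, x_obs):
        # x_obs: (batch, len(t)), the input curve observed on the grid.
        beta = self.coefs @ self.basis                     # (n_out, len(t))
        # Trapezoidal approximation of int beta_j(t) x_i(t) dt.
        return torch.trapezoid(x_obs.unsqueeze(1) * beta, self.t, dim=-1)

t = torch.linspace(0, 1, 101)
layer = FunctionalInputLayer(t)
curves = torch.sin(2 * torch.pi * t) + 0.1 * torch.randn(32, 101)
print(layer(curves).shape)   # torch.Size([32, 16])
```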
This list is automatically generated from the titles and abstracts of the papers on this site.