Applying physics-based loss functions to neural networks for improved
generalizability in mechanics problems
- URL: http://arxiv.org/abs/2105.00075v1
- Date: Fri, 30 Apr 2021 20:31:09 GMT
- Title: Applying physics-based loss functions to neural networks for improved
generalizability in mechanics problems
- Authors: Samuel J. Raymond and David B. Camarillo
- Abstract summary: Physics-Informed Machine Learning (PIML) has gained momentum in the last 5 years as scientists and researchers utilize the benefits afforded by advances in machine learning.
In this work, a new approach to utilizing PIML is discussed that deals with the use of physics-based loss functions.
- Score: 3.655021726150368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-Informed Machine Learning (PIML) has gained momentum in the last 5
years with scientists and researchers aiming to utilize the benefits afforded
by advances in machine learning, particularly in deep learning. With large
scientific data sets with rich spatio-temporal data and high-performance
computing providing large amounts of data to be inferred and interpreted, the
task of PIML is to ensure that these predictions, categorizations, and
inferences are enforced by, and conform to the limits imposed by physical laws.
In this work a new approach to utilizing PIML is discussed that deals with the
use of physics-based loss functions. While typical usage of physical equations
in the loss function requires complex layers of derivatives and other functions
to ensure that the known governing equation is satisfied, here we show that a
similar level of enforcement can be achieved by implementing simpler loss
functions on specific kinds of output data. The generalizability that this
approach affords is shown using examples of simple mechanical models that can
be thought of as sufficiently simplified surrogate models for a wide class of
problems.
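The abstract's idea of a simpler physics-based loss can be illustrated with a minimal sketch. Rather than differentiating a network's output to enforce the governing ODE directly (as in a typical PINN residual), the loss below only penalizes violation of energy conservation on predicted trajectory data for a mass-spring system. The function name, the unit mass and stiffness, and the specific invariant chosen are illustrative assumptions, not details from the paper.

```python
import numpy as np

def energy_conservation_loss(positions, velocities, mass=1.0, k=1.0):
    """Penalize deviation of total mechanical energy from its initial value.

    For a mass-spring system governed by m x'' = -k x, the total energy
    E = (1/2) m v^2 + (1/2) k x^2 is conserved. Checking this invariant on
    predicted (x, v) outputs avoids the nested derivatives needed to
    enforce the governing equation itself.
    """
    energy = 0.5 * mass * velocities**2 + 0.5 * k * positions**2
    return float(np.mean((energy - energy[0]) ** 2))

# The exact solution x(t) = cos(t), v(t) = -sin(t) (with m = k = 1)
# conserves energy, so its loss is essentially zero; a trajectory with
# damped velocities violates conservation and is penalized.
t = np.linspace(0.0, 2.0 * np.pi, 100)
loss_exact = energy_conservation_loss(np.cos(t), -np.sin(t))
loss_wrong = energy_conservation_loss(np.cos(t), -0.5 * np.sin(t))
```

Such an invariant-based penalty would typically be added to a standard data-fitting loss during training; the paper's contribution is showing that this kind of simpler term can provide a similar level of physical enforcement.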
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Physics-Informed Weakly Supervised Learning for Interatomic Potentials [17.165117198519248]
We introduce a physics-informed, weakly supervised approach for training machine-learned interatomic potentials.
We demonstrate reduced energy and force errors -- often lower by a factor of two -- for various baseline models and benchmark data sets.
arXiv Detail & Related papers (2024-07-23T12:49:04Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function, that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z) - Discovering Interpretable Physical Models using Symbolic Regression and
Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z) - Neural oscillators for generalization of physics-informed machine
learning [1.893909284526711]
A primary challenge of physics-informed machine learning (PIML) is its generalization beyond the training domain.
This paper aims to enhance the generalization capabilities of PIML, facilitating practical, real-world applications.
We leverage the inherent causality and temporal sequential characteristics of PDE solutions to fuse PIML models with recurrent neural architectures.
arXiv Detail & Related papers (2023-08-17T13:50:03Z) - On the Integration of Physics-Based Machine Learning with Hierarchical
Bayesian Modeling Techniques [0.0]
This paper proposes to embed mechanics-based models into the mean function of a Gaussian Process (GP) model and characterize potential discrepancies through kernel machines.
The stationarity of the kernel function is a difficult hurdle in the sequential processing of long data sets, resolved through hierarchical Bayesian techniques.
Using numerical and experimental examples, potential applications of the proposed method to structural dynamics inverse problems are demonstrated.
arXiv Detail & Related papers (2023-03-01T02:29:41Z) - Neural Operator: Is data all you need to model the world? An insight
into the impact of Physics Informed Machine Learning [13.050410285352605]
We provide an insight into how data-driven approaches can complement conventional techniques to solve engineering and physics problems.
We highlight a novel and fast machine learning-based approach to learning the solution operator of a PDE.
arXiv Detail & Related papers (2023-01-30T23:29:33Z) - Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z) - Physics-Guided Problem Decomposition for Scaling Deep Learning of
High-dimensional Eigen-Solvers: The Case of Schrödinger's Equation [8.80823317679047]
Deep neural networks (NNs) have been proposed as a viable alternative to traditional simulation-driven approaches for solving high-dimensional eigenvalue equations.
In this paper, we use physics knowledge to decompose the complex regression task of predicting the high-dimensional eigenvectors into simpler sub-tasks.
We demonstrate the efficacy of such physics-guided problem decomposition for the case of Schrödinger's equation in quantum mechanics.
arXiv Detail & Related papers (2022-02-12T05:59:08Z) - Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.