The Principle of Minimum Pressure Gradient: An Alternative Basis for
Physics-Informed Learning of Incompressible Fluid Mechanics
- URL: http://arxiv.org/abs/2401.07489v1
- Date: Mon, 15 Jan 2024 06:12:22 GMT
- Title: The Principle of Minimum Pressure Gradient: An Alternative Basis for
Physics-Informed Learning of Incompressible Fluid Mechanics
- Authors: Hussam Alhussein, Mohammed Daqaq
- Abstract summary: The proposed approach uses the principle of minimum pressure gradient combined with the continuity constraint to train a neural network and predict the flow field in incompressible fluids.
We show that it reduces the computational time per training epoch when compared to the conventional approach.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advances in the application of physics-informed learning to
the field of fluid mechanics have been predominantly grounded in the Newtonian
framework, primarily leveraging the Navier-Stokes equations or one of their
various derivatives to train a neural network. Here, we propose an alternative
approach based on variational methods. The proposed approach uses the principle
of minimum pressure gradient combined with the continuity constraint to train a
neural network and predict the flow field in incompressible fluids. We describe
the underlying principles of the proposed approach, then use a demonstrative
example to illustrate its implementation and show that it reduces the
computational time per training epoch when compared to the conventional
approach.
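To make the variational formulation concrete, here is a minimal sketch of what
a PMPG-style training loss could look like for a 2D unsteady flow. This is an
illustration under stated assumptions, not the authors' implementation: the
JAX network u_net(params, x) mapping (x, y, t) to the velocity (u, v), the
constants rho and mu, and the penalty weight lam are all hypothetical, and
boundary/initial-condition loss terms are omitted.
```python
import jax
import jax.numpy as jnp

# Sketch of a PMPG-style loss for 2D incompressible flow (illustrative,
# not the paper's implementation). u_net(params, x) maps x = (x, y, t)
# to the velocity (u, v); rho, mu, lam are assumed constants.

def pmpg_residual(u_fn, x, rho=1.0, mu=0.01):
    # For the true flow, -grad(p) = rho*(u_t + (u . grad)u) - mu*lap(u),
    # so driving this vector to zero at collocation points (subject to
    # continuity) realizes the principle of minimum pressure gradient
    # without ever predicting the pressure itself.
    u = u_fn(x)                     # velocity, shape (2,)
    J = jax.jacfwd(u_fn)(x)         # Jacobian w.r.t. (x, y, t), shape (2, 3)
    u_t = J[:, 2]                   # time derivative of each component
    conv = J[:, :2] @ u             # convective term (u . grad)u
    H = jax.hessian(u_fn)(x)        # shape (2, 3, 3)
    lap = H[:, 0, 0] + H[:, 1, 1]   # spatial Laplacian of each component
    return rho * (u_t + conv) - mu * lap

def pmpg_loss(params, colloc_pts, u_net, lam=1.0):
    u_fn = lambda x: u_net(params, x)
    res = jax.vmap(lambda x: pmpg_residual(u_fn, x))(colloc_pts)
    # Continuity constraint: div(u) = du/dx + dv/dy = 0.
    div = jax.vmap(lambda x: jnp.trace(jax.jacfwd(u_fn)(x)[:, :2]))(colloc_pts)
    return jnp.mean(jnp.sum(res ** 2, axis=-1)) + lam * jnp.mean(div ** 2)
```
Note that no pressure field is predicted or differentiated anywhere in this
loss, which is consistent with the reported reduction in computational time
per training epoch relative to training against the full Navier-Stokes
residual.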
Related papers
- Neural Networks-based Random Vortex Methods for Modelling Incompressible Flows [0.0]
We introduce a novel Neural Networks-based approach for approximating solutions to the (2D) incompressible Navier--Stokes equations.
Our algorithm uses a Physics-informed Neural Network that approximates the vorticity via a loss function built on a computationally efficient formulation of the Random Vortex dynamics.
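For orientation, the sketch below shows the standard 2D vorticity-transport
residual that such a vorticity network can be checked against; this is a
generic PINN-style formulation, not the paper's random-vortex loss, and the
names w_fn, u_fn, and nu are assumptions.
```python
import jax

# Generic 2D vorticity-transport residual (for orientation only; the
# paper's loss is built from a random-vortex formulation instead).
# w_fn maps x = (x, y, t) to the scalar vorticity; u_fn returns the
# velocity there, e.g. recovered from vorticity via a Biot-Savart integral.

def vorticity_residual(w_fn, u_fn, x, nu=1e-3):
    g = jax.grad(w_fn)(x)        # (dw/dx, dw/dy, dw/dt)
    H = jax.hessian(w_fn)(x)     # 3x3 Hessian of the scalar vorticity
    lap = H[0, 0] + H[1, 1]      # spatial Laplacian
    u = u_fn(x)                  # velocity (u, v) at x
    # Residual of the transport equation dw/dt + u . grad(w) = nu * lap(w).
    return g[2] + u @ g[:2] - nu * lap
```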
arXiv Detail & Related papers (2024-05-22T14:36:23Z)
- Transfer learning for improved generalizability in causal
physics-informed neural networks for beam simulations [1.5654837992353716]
This paper introduces a novel methodology for simulating the dynamics of beams on elastic foundations.
Specifically, Euler-Bernoulli and Timoshenko beam models on the Winkler foundation are simulated using a transfer learning approach.
arXiv Detail & Related papers (2023-11-01T15:19:54Z)
- Learning the solution operator of two-dimensional incompressible
Navier-Stokes equations using physics-aware convolutional neural networks [68.8204255655161]
We introduce a technique with which it is possible to learn approximate solutions to the steady-state Navier-Stokes equations in varying geometries without the need for parametrization.
The results of our physics-aware CNN are compared to a state-of-the-art data-based approach.
arXiv Detail & Related papers (2023-08-04T05:09:06Z)
- Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
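The division of labor argued for here can be illustrated in a few lines; the
1D elastodynamics setting and the name stress_net below are assumptions for
illustration, not NCLaw's actual architecture.
```python
import jax.numpy as jnp

# Illustrative split (assumed 1D setting, not NCLaw's architecture):
# the governing PDE update is hard-coded, and only the constitutive
# law stress = f(strain) is a learned network.

def step(u, v, stress_net, params, dx=0.01, dt=1e-4, rho=1.0):
    strain = jnp.gradient(u, dx)            # kinematics: known physics
    stress = stress_net(params, strain)     # material model: learned
    accel = jnp.gradient(stress, dx) / rho  # momentum balance: known PDE
    return u + dt * v, v + dt * accel       # explicit time step
```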
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Guaranteed Conservation of Momentum for Learning Particle-based Fluid
Dynamics [96.9177297872723]
We present a novel method for guaranteeing conservation of linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
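The underlying identity is simple enough to state in a short sketch: if every
learned pairwise update is antisymmetrized, the contributions cancel in pairs
and the summed momentum change is exactly zero. The loop below is a minimal
illustration of that identity, not the paper's continuous-convolution layers;
kernel stands for an arbitrary learned pairwise function.
```python
import jax.numpy as jnp

# Antisymmetry => hard momentum conservation (minimal illustration,
# not the paper's antisymmetric continuous-convolution layers).

def momentum_update(kernel, x):
    n = x.shape[0]
    dv = jnp.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i != j:
                # Antisymmetrized interaction: f(i, j) = -f(j, i).
                f = kernel(x[i], x[j]) - kernel(x[j], x[i])
                dv = dv.at[i].add(f)
    return dv  # rows sum to zero, so total momentum is unchanged
```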
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
- Physics-informed neural networks for the shallow-water equations on the
sphere [0.0]
Physics-informed neural networks are trained to satisfy the differential equations along with the prescribed initial and boundary data.
We propose a simple multi-model approach to tackle test cases of comparatively long time intervals.
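A plausible reading of that multi-model approach is sketched below, under the
assumption that it splits [0, T] into windows and warm-starts each window's
network from the previous one's final state; the helper names train_window
and predict are hypothetical.
```python
# Sketch of a multi-model time-marching scheme (assumed form; the
# helper names train_window and predict are hypothetical).

def train_multimodel(train_window, predict, ic, t0=0.0, T=10.0, n_windows=5):
    dt = (T - t0) / n_windows
    state, models = ic, []
    for k in range(n_windows):
        a, b = t0 + k * dt, t0 + (k + 1) * dt
        model = train_window(state, t_start=a, t_end=b)  # PINN on [a, b]
        state = predict(model, b)  # final state seeds the next window
        models.append(model)
    return models
```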
arXiv Detail & Related papers (2021-04-01T16:47:40Z)
- A Framework for Fluid Motion Estimation using a Constraint-Based
Refinement Approach [0.0]
We formulate a general framework for fluid motion estimation using a constraint-based refinement approach.
We demonstrate that for a particular choice of constraint, our results closely approximate the classical continuity equation-based method for fluid flow.
We also observe a surprising connection to the Cauchy-Riemann operator that diagonalizes the system leading to a diffusive phenomenon involving the divergence and the curl of the flow.
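That connection can be made explicit (standard notation, not necessarily the
paper's): acting on the flow w = (u, v), a Cauchy-Riemann-type operator
produces exactly the divergence and the curl, and it squares to the Laplacian,
which is the diagonalization behind the diffusive behaviour.
```latex
A = \begin{pmatrix} \partial_x & \partial_y \\ -\partial_y & \partial_x \end{pmatrix},
\qquad
A \begin{pmatrix} u \\ v \end{pmatrix}
  = \begin{pmatrix} u_x + v_y \\ v_x - u_y \end{pmatrix}
  = \begin{pmatrix} \operatorname{div} \mathbf{w} \\ \operatorname{curl} \mathbf{w} \end{pmatrix},
\qquad
A^{\mathsf{T}} A = \Delta I.
```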
arXiv Detail & Related papers (2020-11-24T18:23:39Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
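A minimal sketch of how such a flow can stay on O(d) follows; this is a
simplification for illustration, not the paper's exact parametrization.
Right-multiplying the weights by the exponential of a skew-symmetric
generator preserves orthogonality exactly, keeping all singular values at 1.
```python
from jax.scipy.linalg import expm

# One discrete step of a parameter flow on the orthogonal group O(d)
# (a simplification for illustration, not the paper's parametrization).

def step_on_orthogonal_group(W, B, dt):
    # B - B.T is skew-symmetric, so its matrix exponential is orthogonal;
    # if W is orthogonal, the product stays exactly on O(d), and its
    # singular values remain 1 (no vanishing or exploding factors).
    return W @ expm(dt * (B - B.T))
```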
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Physics-based polynomial neural networks for one-shot learning of
dynamical systems from one or a few samples [0.0]
The paper describes practical results on both a simple pendulum and one of the world's largest X-ray sources.
It is demonstrated in practice that the proposed approach allows recovering complex physics from noisy, limited, and partial observations.
arXiv Detail & Related papers (2020-05-24T09:27:10Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple literature-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)