Neural Networks-based Random Vortex Methods for Modelling Incompressible Flows
- URL: http://arxiv.org/abs/2405.13691v1
- Date: Wed, 22 May 2024 14:36:23 GMT
- Title: Neural Networks-based Random Vortex Methods for Modelling Incompressible Flows
- Authors: Vladislav Cherepanov, Sebastian W. Ertel
- Abstract summary: We introduce a novel Neural Networks-based approach for approximating solutions to the (2D) incompressible Navier--Stokes equations.
Our algorithm uses a Physics-informed Neural Network that approximates the vorticity via a loss function built on a computationally efficient formulation of the Random Vortex dynamics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we introduce a novel Neural Networks-based approach for approximating solutions to the (2D) incompressible Navier--Stokes equations. Our algorithm uses a Physics-informed Neural Network that approximates the vorticity via a loss function built on a computationally efficient formulation of the Random Vortex dynamics. The neural vorticity estimator is then combined with traditional numerical PDE solvers for the Poisson equation to compute the velocity field. The main advantage of our method compared to standard Physics-informed Neural Networks is that it strictly enforces physical properties, such as incompressibility or boundary conditions, which might otherwise be hard to guarantee with purely Neural Networks-based approaches.
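As an illustration of the vorticity-to-velocity step described in the abstract, the following is a minimal sketch (not the authors' implementation) that recovers a divergence-free velocity field from a given 2D vorticity field with a spectral Poisson solve, assuming a doubly periodic domain; the neural vorticity estimator and the paper's treatment of boundary conditions are not reproduced here.

```python
import numpy as np

def velocity_from_vorticity(omega, L=2 * np.pi):
    """Recover a divergence-free velocity field from 2D vorticity by solving
    -laplace(psi) = omega for the stream function psi spectrally on a doubly
    periodic [0, L)^2 grid, then setting u = d(psi)/dy, v = -d(psi)/dx."""
    n = omega.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                    # avoid division by zero for the mean mode

    omega_hat = np.fft.fft2(omega)
    psi_hat = omega_hat / k2                          # psi_hat = omega_hat / |k|^2
    psi_hat[0, 0] = 0.0                               # zero-mean stream function

    u = np.real(np.fft.ifft2(1j * ky * psi_hat))      # u =  d(psi)/dy
    v = np.real(np.fft.ifft2(-1j * kx * psi_hat))     # v = -d(psi)/dx
    return u, v

# Example: a Taylor-Green-like vorticity field on a 64x64 periodic grid.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = velocity_from_vorticity(2.0 * np.sin(X) * np.sin(Y))
```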
Related papers
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
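NWoS builds on the classical walk-on-spheres estimator; the sketch below shows only that classical Monte Carlo building block (no neural component) for the Laplace equation on the unit disk, with boundary data chosen purely for illustration.

```python
import numpy as np

def walk_on_spheres_laplace(x0, boundary_value, eps=1e-3, rng=None):
    """Classical walk-on-spheres estimate of u(x0) for laplace(u) = 0 on the
    unit disk with Dirichlet data `boundary_value` on the circle: repeatedly
    jump to a uniform point on the largest circle around the current point
    that fits inside the domain, and stop within `eps` of the boundary."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    while True:
        r = 1.0 - np.linalg.norm(x)                 # distance to the unit circle
        if r < eps:
            return boundary_value(x / np.linalg.norm(x))
        theta = rng.uniform(0.0, 2 * np.pi)
        x = x + r * np.array([np.cos(theta), np.sin(theta)])

# Harmonic boundary data g(x, y) = x^2 - y^2; its harmonic extension is x^2 - y^2.
g = lambda p: p[0] ** 2 - p[1] ** 2
estimates = [walk_on_spheres_laplace([0.3, 0.2], g) for _ in range(2000)]
print(np.mean(estimates), "vs exact", 0.3**2 - 0.2**2)
```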
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Enriched Physics-informed Neural Networks for Dynamic
Poisson-Nernst-Planck Systems [0.8192907805418583]
This paper proposes a meshless deep learning algorithm, enriched physics-informed neural networks (EPINNs), to solve dynamic Poisson-Nernst-Planck (PNP) equations.
EPINNs take the traditional physics-informed neural network as the foundation framework and add adaptive loss weights to balance the loss terms.
Numerical results indicate that the new method has better applicability than traditional numerical methods in solving such coupled nonlinear systems.
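The summary does not specify the adaptive weighting rule, so the snippet below sketches one common choice (learnable log-variance weights per loss term) purely to illustrate the idea of balancing several PINN loss terms; the EPINN scheme itself may differ.

```python
import torch

# Hypothetical adaptive balancing of several PINN loss terms via learnable
# log-variances; a generic scheme, not necessarily the EPINN rule.
log_sigma = torch.zeros(3, requires_grad=True)   # one learnable weight per loss term

def weighted_total(loss_terms, log_sigma):
    """Combine loss terms with weights exp(-log_sigma); the additive
    log_sigma term keeps the weights from collapsing to zero."""
    losses = torch.stack(loss_terms)
    return torch.sum(torch.exp(-log_sigma) * losses + log_sigma)

# Placeholder loss terms (e.g. PDE residual, boundary, initial condition);
# in training, log_sigma would be optimised jointly with the network weights.
loss_terms = [torch.tensor(1.0), torch.tensor(0.1), torch.tensor(5.0)]
total = weighted_total(loss_terms, log_sigma)
total.backward()   # gradients flow into log_sigma
```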
arXiv Detail & Related papers (2024-02-01T02:57:07Z) - A Stable and Scalable Method for Solving Initial Value PDEs with Neural
Networks [52.5899851000193]
We show that current methods based on the ODE approach to initial value PDEs suffer from two key issues: first, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
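The spatial decomposition can be illustrated without any neural network: the hypothetical sketch below splits a fine 2D field into interleaved coarser-resolution sub-fields and re-interleaves them, which is one concrete reading of the decomposition described above (the temporal part and the learned solvers are omitted).

```python
import numpy as np

def stagger_decompose(field, s=2):
    """Split a fine 2D field into s*s interleaved coarse sub-fields, each of
    which could be handled by a cheaper coarse-resolution solver."""
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def stagger_recompose(subfields, shape, s=2):
    """Re-interleave the coarse sub-fields back onto the fine grid."""
    out = np.empty(shape, dtype=subfields[0].dtype)
    k = 0
    for i in range(s):
        for j in range(s):
            out[i::s, j::s] = subfields[k]
            k += 1
    return out

field = np.random.rand(64, 64)
parts = stagger_decompose(field)
assert np.allclose(stagger_recompose(parts, field.shape), field)
```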
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Simple initialization and parametrization of sinusoidal networks via
their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
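For reference, a minimal sinusoidal layer looks like the sketch below (a SIREN-style construction, not necessarily the exact parametrization proposed in the paper); the frequency factor omega0 plays the role of the adjustable kernel bandwidth mentioned above.

```python
import torch

class SineLayer(torch.nn.Module):
    """Dense layer with sinusoidal activation: x -> sin(omega0 * (W x + b)).
    The frequency factor omega0 acts as the adjustable bandwidth: larger
    values let the network represent higher-frequency content."""
    def __init__(self, in_features, out_features, omega0=30.0):
        super().__init__()
        self.linear = torch.nn.Linear(in_features, out_features)
        self.omega0 = omega0

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

# A small sinusoidal network mapping 2D coordinates to a scalar field.
net = torch.nn.Sequential(SineLayer(2, 64), SineLayer(64, 64), torch.nn.Linear(64, 1))
y = net(torch.rand(16, 2))
```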
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
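The POD step referenced above can be sketched as follows with synthetic snapshot data (array names and sizes are assumptions); the regression of neural networks onto this basis and the branch network are not reproduced.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one flattened flow field.
snapshots = np.random.rand(500, 40)        # (n_dof, n_snapshots), placeholder data

# Proper Orthogonal Decomposition via thin SVD.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 8                                      # reduced-order dimension (assumed)
basis = U[:, :r]                           # POD basis vectors

# A field is now represented by r coefficients instead of n_dof values;
# in the paper, neural networks are regressed onto this basis (not shown here).
field = snapshots[:, 0]
coeffs = basis.T @ field
reconstruction = basis @ coeffs
```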
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian smoothed model and show that, via a formula derived from Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
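A minimal 1D illustration of the idea: for a Gaussian-smoothed function, Stein's identity gives the second derivative from forward evaluations alone, with no back-propagation through f. The sketch below is a toy Monte Carlo estimator, not the paper's training procedure.

```python
import numpy as np

def smoothed_second_derivative(f, x, sigma=0.3, n_samples=200_000, rng=None):
    """Monte Carlo estimate of the second derivative of the Gaussian-smoothed
    function f_sigma(x) = E[f(x + sigma * eps)], eps ~ N(0, 1), via Stein's
    identity:  f_sigma''(x) = E[f(x + sigma * eps) * (eps**2 - 1)] / sigma**2.
    Only forward evaluations of f are needed; no back-propagation through f."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(n_samples)
    return np.mean(f(x + sigma * eps) * (eps**2 - 1)) / sigma**2

# For f(x) = x**3 the smoothed function is x**3 + 3 * sigma**2 * x, so its
# second derivative at x = 1 is 6; the estimate should be close to that.
print(smoothed_second_derivative(lambda x: x**3, 1.0))
```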
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Physics-Informed Neural Network Method for Solving One-Dimensional
Advection Equation Using PyTorch [0.0]
The PINN approach allows training neural networks while respecting the PDE as a strong constraint in the optimization.
In standard small-scale circulation simulations, it is shown that the conventional approach incorporates a pseudo diffusive effect that is almost as large as the effect of the turbulent diffusion model.
Of all the schemes tested, only the PINNs approximation accurately predicted the outcome.
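As a minimal PyTorch sketch of the PINN residual for the one-dimensional advection equation u_t + c u_x = 0 (network size, advection speed, and collocation sampling are assumptions; initial and boundary loss terms are omitted):

```python
import torch

# Small network u_theta(t, x); sizes and the advection speed c are assumptions.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
c = 1.0  # advection speed

def pde_residual(t, x):
    """Residual of u_t + c * u_x = 0 at collocation points (t, x)."""
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t, u_x = torch.autograd.grad(
        u, (t, x), grad_outputs=torch.ones_like(u), create_graph=True
    )
    return u_t + c * u_x

# One Adam step on the interior PDE loss (initial/boundary terms omitted).
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t, x = torch.rand(256, 1), torch.rand(256, 1)
loss = pde_residual(t, x).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```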
arXiv Detail & Related papers (2021-03-15T05:39:17Z) - Partial Differential Equations is All You Need for Generating Neural Architectures -- A Theory for Physical Artificial Intelligence Systems [40.20472268839781]
We generalize the reaction-diffusion equation from statistical physics, the Schrödinger equation from quantum mechanics, and the Helmholtz equation from paraxial optics into a neural partial differential equation (NPDE).
We use the finite difference method to discretize the NPDE and find a numerical solution.
Basic building blocks of deep neural network architectures, including multi-layer perceptrons, convolutional neural networks, and recurrent neural networks, are generated.
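The finite-difference view can be made concrete for one of the equations named above: an explicit time step of a 1D reaction-diffusion equation is a fixed 3-point convolution plus a pointwise nonlinearity, which is why discretized PDEs resemble neural-network layers. The sketch below is an illustrative toy, not the paper's NPDE construction.

```python
import numpy as np

def reaction_diffusion_step(u, dt=1e-3, dx=0.1, D=1.0, r=1.0):
    """One explicit finite-difference step of u_t = D * u_xx + r * u * (1 - u).
    The diffusion term is a fixed 3-point stencil (a convolution) and the
    reaction term a pointwise nonlinearity, so each time step has the shape
    of a convolutional layer followed by an activation (periodic boundaries)."""
    laplacian = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (D * laplacian + r * u * (1 - u))

u = np.exp(-np.linspace(-5, 5, 101) ** 2)   # initial bump
for _ in range(100):
    u = reaction_diffusion_step(u)
```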
arXiv Detail & Related papers (2021-03-10T00:05:46Z) - Neural Vortex Method: from Finite Lagrangian Particles to Infinite
Dimensional Eulerian Dynamics [16.563723810812807]
We propose a novel learning-based framework, the Neural Vortex Method (NVM).
NVM builds a neural-network description of the Lagrangian vortex structures and their interaction dynamics.
By embedding these two networks with a vorticity-to-velocity Poisson solver, we can accurately predict the fluid dynamics.
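The Lagrangian side of a vortex method can be sketched without the neural components: the regularised Biot-Savart sum below gives the velocity induced by a set of 2D point vortices, i.e. the classical vorticity-to-velocity building block that NVM embeds its networks into (parameter names and the smoothing delta are assumptions).

```python
import numpy as np

def biot_savart_velocity(targets, positions, strengths, delta=0.05):
    """Velocity induced at `targets` by 2D point vortices at `positions` with
    circulations `strengths`, using a regularised Biot-Savart kernel."""
    diff = targets[:, None, :] - positions[None, :, :]          # (n_t, n_v, 2)
    r2 = np.sum(diff**2, axis=-1) + delta**2                    # regularised |x - x_v|^2
    # Rotated kernel K(x) = (-y, x) / (2 * pi * |x|^2)
    kernel = np.stack([-diff[..., 1], diff[..., 0]], axis=-1) / (2 * np.pi * r2[..., None])
    return np.einsum("tvd,v->td", kernel, strengths)

# A counter-rotating vortex pair and one evaluation point.
positions = np.array([[0.0, 0.0], [1.0, 0.0]])
strengths = np.array([1.0, -1.0])
print(biot_savart_velocity(np.array([[0.5, 0.5]]), positions, strengths))
```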
arXiv Detail & Related papers (2020-06-07T15:12:25Z)