Solving the Discretised Multiphase Flow Equations with Interface
Capturing on Structured Grids Using Machine Learning Libraries
- URL: http://arxiv.org/abs/2401.06755v2
- Date: Sun, 3 Mar 2024 17:46:05 GMT
- Authors: Boyang Chen, Claire E. Heaney, Jefferson L. M. A. Gomes, Omar K.
Matar, Christopher C. Pain
- Abstract summary: This paper solves the discretised multiphase flow equations using tools and methods from machine-learning libraries.
For the first time, finite element discretisations of multiphase flows can be solved using an approach based on (untrained) convolutional neural networks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper solves the discretised multiphase flow equations using tools and
methods from machine-learning libraries. The idea comes from the observation
that convolutional layers can be used to express a discretisation as a neural
network whose weights are determined by the numerical method, rather than by
training, and hence, we refer to this approach as Neural Networks for PDEs
(NN4PDEs). To solve the discretised multiphase flow equations, a multigrid
solver is implemented through a convolutional neural network with a U-Net
architecture. Immiscible two-phase flow is modelled by the 3D incompressible
Navier-Stokes equations with surface tension and advection of a volume fraction
field, which describes the interface between the fluids. A new compressive
algebraic volume-of-fluids method is introduced, based on a residual
formulation using Petrov-Galerkin for accuracy and designed with NN4PDEs in
mind. High-order finite-element based schemes are chosen to model a collapsing
water column and a rising bubble. Results compare well with experimental data
and other numerical results from the literature, demonstrating that, for the
first time, finite element discretisations of multiphase flows can be solved
using an approach based on (untrained) convolutional neural networks. A benefit
of expressing numerical discretisations as neural networks is that the code can
run, without modification, on CPUs, GPUs or the latest accelerators designed
especially to run AI codes.
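The core idea of NN4PDEs, as described in the abstract, is that a numerical discretisation can be written as a convolutional layer whose kernel weights are fixed by the numerical method rather than learned. As an illustrative sketch only (not the paper's code, and using plain NumPy rather than a machine-learning library), a second-order 5-point finite-difference Laplacian can be applied to a field exactly as a convolutional layer would apply its kernel:

```python
import numpy as np

# Hypothetical sketch of the NN4PDEs idea: a finite-difference stencil
# expressed as a convolution whose weights come from the discretisation,
# not from training. Names here (conv2d, laplacian_kernel) are illustrative.

def conv2d(field, kernel):
    """Valid-mode 2D cross-correlation, as a convolutional layer applies it."""
    kh, kw = kernel.shape
    h, w = field.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(field[i:i + kh, j:j + kw] * kernel)
    return out

# Kernel weights are determined by the numerical method (5-point stencil,
# grid spacing dx), so the "network" needs no training.
dx = 1.0
laplacian_kernel = np.array([[0.,  1., 0.],
                             [1., -4., 1.],
                             [0.,  1., 0.]]) / dx**2

# Test on u(x, y) = x^2 + y^2, whose continuous Laplacian is 4 everywhere;
# the second-order stencil is exact for quadratics.
x = np.arange(10) * dx
u = x[:, None]**2 + x[None, :]**2
lap_u = conv2d(u, laplacian_kernel)
print(np.allclose(lap_u, 4.0))
```

In a machine-learning library, `conv2d` would simply be a convolutional layer with these fixed weights, which is what lets the same code run unmodified on CPUs, GPUs, or AI accelerators.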
Related papers
- Learning-based Multi-continuum Model for Multiscale Flow Problems [24.93423649301792]
We propose a learning-based multi-continuum model to enrich the homogenized equation and improve the accuracy of the single model for multiscale problems.
Our proposed learning-based multi-continuum model can resolve multiple interacted media within each coarse grid block and describe the mass transfer among them.
arXiv Detail & Related papers (2024-03-21T02:30:56Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- SPINN: Sparse, Physics-based, and Interpretable Neural Networks for PDEs [0.0]
We introduce a class of Sparse, Physics-based, and Interpretable Neural Networks (SPINN) for solving ordinary and partial differential equations.
By reinterpreting a traditional meshless representation of solutions of PDEs as a special sparse deep neural network, we develop a class of sparse neural network architectures that are interpretable.
arXiv Detail & Related papers (2021-02-25T17:45:50Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, known as physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating the physics-based data on which they rely.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.