Enhancement of shock-capturing methods via machine learning
- URL: http://arxiv.org/abs/2002.02521v1
- Date: Thu, 6 Feb 2020 21:51:39 GMT
- Title: Enhancement of shock-capturing methods via machine learning
- Authors: Ben Stevens, Tim Colonius
- Abstract summary: We develop an improved finite-volume method for simulating PDEs with discontinuous solutions.
We train a neural network to improve the results of a fifth-order WENO method.
We find that our method outperforms WENO in simulations where the numerical solution becomes overly diffused.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, machine learning has been used to create data-driven
solutions to problems for which an algorithmic solution is intractable, as well
as fine-tuning existing algorithms. This research applies machine learning to
the development of an improved finite-volume method for simulating PDEs with
discontinuous solutions. Shock capturing methods make use of nonlinear
switching functions that are not guaranteed to be optimal. Because data can be
used to learn nonlinear relationships, we train a neural network to improve the
results of a fifth-order WENO method. We post-process the outputs of the neural
network to guarantee that the method is consistent. The training data consists
of the exact mapping between cell averages and interpolated values for a set of
integrable functions that represent waveforms we would expect to see while
simulating a PDE. We demonstrate our method on linear advection of a
discontinuous function, the inviscid Burgers' equation, and the 1-D Euler
equations. For the latter, we examine the Shu-Osher model problem for
turbulence-shockwave interactions. We find that our method outperforms WENO in
simulations where the numerical solution becomes overly diffused due to
numerical viscosity.
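As a concrete illustration of the idea, the sketch below (not the authors' code) computes the standard fifth-order WENO-JS reconstruction coefficients at a cell face, adds a stand-in learned perturbation where the trained network's output would go, and then post-processes the coefficients so they still sum to one, which is one simple way to keep the resulting scheme consistent. The function names, the zero `delta` placeholder, and the particular sum-to-one projection are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def weno5_coefficients(u, eps=1e-6):
    # u = [u_{i-2}, u_{i-1}, u_i, u_{i+1}, u_{i+2}] are cell averages.
    # Jiang-Shu smoothness indicators for the three candidate stencils.
    b0 = 13/12*(u[0] - 2*u[1] + u[2])**2 + 1/4*(u[0] - 4*u[1] + 3*u[2])**2
    b1 = 13/12*(u[1] - 2*u[2] + u[3])**2 + 1/4*(u[1] - u[3])**2
    b2 = 13/12*(u[2] - 2*u[3] + u[4])**2 + 1/4*(3*u[2] - 4*u[3] + u[4])**2
    # Nonlinear weights built from the ideal weights (1/10, 6/10, 3/10).
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    # Combine the three candidate-stencil reconstructions into one
    # 5-point coefficient vector c, so that c @ u approximates u(x_{i+1/2}).
    c = (w[0]*np.array([2, -7, 11, 0, 0])
         + w[1]*np.array([0, -1, 5, 2, 0])
         + w[2]*np.array([0, 0, 2, 5, -1])) / 6
    return c

def make_consistent(c_perturbed):
    # Affine projection onto the hyperplane sum(c) = 1: a simple way to
    # guarantee that a learned correction cannot break consistency.
    return c_perturbed - (c_perturbed.sum() - 1.0) / c_perturbed.size

# Example with a smooth (linear) stencil; `delta` is a placeholder for the
# output of a trained network (zero here, so the result reduces to WENO5).
u = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
delta = np.zeros(5)                      # hypothetical learned correction
c_hat = make_consistent(weno5_coefficients(u) + delta)
print(c_hat @ u, c_hat.sum())            # face value ~0.625, coefficients sum to 1
```

In this setup the network only perturbs the reconstruction coefficients, so any scheme it produces falls back to a consistent finite-volume interpolation after the projection step.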
Related papers
- Coupling Machine Learning Local Predictions with a Computational Fluid Dynamics Solver to Accelerate Transient Buoyant Plume Simulations [0.0]
This study presents a versatile and scalable hybrid methodology, combining CFD and machine learning.
The objective was to leverage local features to predict the temporal changes in the pressure field in comparable scenarios.
Pressure estimates were employed as initial values to accelerate the pressure-velocity coupling procedure.
arXiv Detail & Related papers (2024-09-11T10:38:30Z)
- Diffusion-Generative Multi-Fidelity Learning for Physical Simulation [24.723536390322582]
We develop a diffusion-generative multi-fidelity learning method based on stochastic differential equations (SDEs), where the generation is a continuous denoising process.
By conditioning on additional inputs (temporal or spatial variables), our model can efficiently learn and predict multi-dimensional solution arrays.
arXiv Detail & Related papers (2023-11-09T18:59:05Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ an implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Physics-informed Neural Networks approach to solve the Blasius function [0.0]
This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function.
It is seen that this method produces results that are on par with the numerical and conventional methods.
arXiv Detail & Related papers (2022-12-31T03:14:42Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Numerical Approximation in CFD Problems Using Physics Informed Machine Learning [0.0]
The thesis focuses on various techniques to find an alternate approximation method that could be universally used for a wide range of CFD problems.
The focus is on physics-informed machine learning techniques, where differential equations can be solved without any training on computed data.
The extreme learning machine (ELM) is a very fast neural network algorithm that achieves its speed at the cost of tunable parameters.
arXiv Detail & Related papers (2021-11-01T22:54:51Z)
- Feature Engineering with Regularity Structures [4.082216579462797]
We investigate the use of models from the theory of regularity structures as features in machine learning tasks.
We provide a flexible definition of a model feature vector associated to a space-time signal, along with two algorithms which illustrate ways in which these features can be combined with linear regression.
We apply these algorithms in several numerical experiments designed to learn solutions to PDEs with a given forcing and boundary data.
arXiv Detail & Related papers (2021-08-12T17:53:47Z)