Enhancement of shock-capturing methods via machine learning
- URL: http://arxiv.org/abs/2002.02521v1
- Date: Thu, 6 Feb 2020 21:51:39 GMT
- Title: Enhancement of shock-capturing methods via machine learning
- Authors: Ben Stevens, Tim Colonius
- Abstract summary: We develop an improved finite-volume method for simulating PDEs with discontinuous solutions.
We train a neural network to improve the results of a fifth-order WENO method.
We find that our method outperforms WENO in simulations where the numerical solution becomes overly diffused.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, machine learning has been used to create data-driven
solutions to problems for which an algorithmic solution is intractable, as well
as fine-tuning existing algorithms. This research applies machine learning to
the development of an improved finite-volume method for simulating PDEs with
discontinuous solutions. Shock capturing methods make use of nonlinear
switching functions that are not guaranteed to be optimal. Because data can be
used to learn nonlinear relationships, we train a neural network to improve the
results of a fifth-order WENO method. We post-process the outputs of the neural
network to guarantee that the method is consistent. The training data consists
of the exact mapping between cell averages and interpolated values for a set of
integrable functions that represent waveforms we would expect to see while
simulating a PDE. We demonstrate our method on linear advection of a
discontinuous function, the inviscid Burgers' equation, and the 1-D Euler
equations. For the latter, we examine the Shu-Osher model problem for
turbulence-shockwave interactions. We find that our method outperforms WENO in
simulations where the numerical solution becomes overly diffused due to
numerical viscosity.
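The consistency guarantee described in the abstract can be illustrated with a minimal sketch (not the authors' code): classic fifth-order WENO nonlinear weights computed from the Jiang–Shu smoothness indicators, plus a renormalization step that keeps a perturbation of the weights (here a hypothetical stand-in for the neural-network output) summing to one.

```python
import numpy as np

def weno5_weights(v):
    """Classic WENO5 nonlinear weights from five cell averages v[0..4],
    using the Jiang-Shu smoothness indicators."""
    eps = 1e-6
    b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 1/4*(v[0]-4*v[1]+3*v[2])**2
    b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 1/4*(v[1]-v[3])**2
    b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 1/4*(3*v[2]-4*v[3]+v[4])**2
    d = np.array([0.1, 0.6, 0.3])                 # ideal linear weights
    a = d / (eps + np.array([b0, b1, b2]))**2
    return a / a.sum()

def consistent_weights(w, delta):
    """Perturb the weights (delta stands in for a neural-network output)
    and renormalize so they remain a convex combination -- one simple way
    to enforce a consistency constraint."""
    w_new = np.maximum(w + delta, 0.0)
    return w_new / w_new.sum()

# On smooth (here linear) data the weights recover the ideal values
v = np.array([1.0, 1.1, 1.2, 1.3, 1.4])
w = weno5_weights(v)
print(w)                                          # ~ [0.1, 0.6, 0.3]

# A perturbed weight vector still sums to one after post-processing
w2 = consistent_weights(w, np.array([0.05, -0.02, 0.01]))
print(w2.sum())                                   # 1.0
```

On discontinuous data the smoothness indicators grow on the stencils crossing the shock, and the corresponding weights shrink; the paper's contribution is learning a better nonlinear map than the hand-designed one above.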
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
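The gradient-free setting can be sketched with a generic two-point estimator (an illustration of the zeroth-order idea, not the paper's accelerated algorithm):

```python
import numpy as np

def zo_grad(f, x, tau=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate: probe f along a random
    unit direction e and scale by the dimension (no gradients of f used)."""
    if rng is None:
        rng = np.random.default_rng(0)
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    return x.size * (f(x + tau*e) - f(x - tau*e)) / (2*tau) * e

def zo_sgd(f, x0, lr=0.1, steps=500):
    rng = np.random.default_rng(1)
    x = x0.copy()
    for _ in range(steps):
        x -= lr * zo_grad(f, x, rng=rng)
    return x

# Minimize a convex quadratic using only function evaluations
f = lambda x: np.sum((x - 2.0)**2)
x_star = zo_sgd(f, np.zeros(3))
print(f(x_star))  # close to 0
```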
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - KAN/MultKAN with Physics-Informed Spline fitting (KAN-PISF) for ordinary/partial differential equation discovery of nonlinear dynamic systems [0.0]
There is a dire need to interpret the machine learning models to develop a physical understanding of dynamic systems.
In this study, an equation discovery framework is proposed that includes i) a sequentially regularized derivatives for denoising (SRDD) algorithm to denoise the measured data, and ii) a KAN to identify the equation structure and suggest relevant nonlinear functions.
arXiv Detail & Related papers (2024-11-18T18:14:51Z) - Coupling Machine Learning Local Predictions with a Computational Fluid Dynamics Solver to Accelerate Transient Buoyant Plume Simulations [0.0]
This study presents a versatile and scalable hybrid methodology, combining CFD and machine learning.
The objective was to leverage local features to predict the temporal changes in the pressure field in comparable scenarios.
Pressure estimates were employed as initial values to accelerate the pressure-velocity coupling procedure.
arXiv Detail & Related papers (2024-09-11T10:38:30Z) - Diffusion-Generative Multi-Fidelity Learning for Physical Simulation [24.723536390322582]
We develop a diffusion-generative multi-fidelity learning method based on stochastic differential equations (SDEs), where the generation is a continuous denoising process.
By conditioning on additional inputs (temporal or spatial variables), our model can efficiently learn and predict multi-dimensional solution arrays.
arXiv Detail & Related papers (2023-11-09T18:59:05Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
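The kernel-regression ingredient can be sketched with generic RBF kernel ridge regression (an illustration, not the KBASS implementation):

```python
import numpy as np

def kernel_ridge(X, y, Xq, lengthscale=0.5, lam=1e-3):
    """RBF kernel ridge regression: a smooth function estimate that is
    robust to sparse, noisy samples."""
    def K(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / lengthscale**2)
    alpha = np.linalg.solve(K(X, X) + lam * np.eye(len(X)), y)
    return K(Xq, X) @ alpha

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-2, 2, 40))              # sparse, noisy samples
y = np.sin(2 * X) + 0.05 * rng.standard_normal(40)
Xq = np.linspace(-1.8, 1.8, 100)
u = kernel_ridge(X, y, Xq)
print(np.max(np.abs(u - np.sin(2 * Xq))))        # small reconstruction error
```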
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can be trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
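The stability benefit of implicit updates can be seen on a toy stiff quadratic (an illustration of implicit vs. explicit gradient descent, not the paper's PINN training loop):

```python
import numpy as np

# Implicit vs. explicit gradient descent on a stiff quadratic
# L(x) = 0.5 * x^T A x with one large eigenvalue.
A = np.diag([1.0, 100.0])
lr = 0.05                      # explicit GD needs lr < 2/100 here
x_exp = np.array([1.0, 1.0])
x_imp = np.array([1.0, 1.0])

for _ in range(50):
    # explicit update: x <- x - lr * grad L(x)
    x_exp = x_exp - lr * (A @ x_exp)
    # implicit update: x_new = x - lr * grad L(x_new)
    # => (I + lr * A) x_new = x, a linear solve for this quadratic
    x_imp = np.linalg.solve(np.eye(2) + lr * A, x_imp)

print(np.linalg.norm(x_exp))   # explicit iteration diverges
print(np.linalg.norm(x_imp))   # implicit iteration decays toward 0
```

The implicit step is unconditionally stable for this problem at any learning rate, which is the intuition behind using ISGD on stiff (high-frequency, multi-scale) targets.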
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
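The probabilistic representation can be sketched in its simplest form, the 1-D heat equation, where the Feynman–Kac formula reduces the solution to an average over Brownian particles (an illustration, not the paper's solver):

```python
import numpy as np

def heat_mc(u0, x, t, n=200_000, rng=None):
    """Feynman-Kac Monte Carlo solution of u_t = 0.5 * u_xx:
    u(t, x) = E[u0(x + W_t)], averaged over samples W_t ~ N(0, t)."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = rng.standard_normal(n) * np.sqrt(t)
    return np.mean(u0(x + w))

# Gaussian initial data admits a closed-form solution to compare against
s0 = 0.5
u0 = lambda z: np.exp(-z**2 / (2 * s0**2))
t, x = 0.3, 0.4

mc = heat_mc(u0, x, t)
s_t = np.sqrt(s0**2 + t)
exact = (s0 / s_t) * np.exp(-x**2 / (2 * s_t**2))
print(mc, exact)   # the two agree to roughly 1e-3
```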
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
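That containment is easy to see in the simplest case: a central finite difference is message passing on a chain graph with fixed edge weights (a hand-written illustration, not the learned solver):

```python
import numpy as np

# A 1-D central finite difference viewed as message passing on a chain
# graph: each node sums fixed-weight "messages" from its two neighbors.
n = 64
dx = 2 * np.pi / n
x = np.arange(n) * dx
u = np.sin(x)

# edge weights +-1/(2*dx); a learned solver replaces these with a network
msg_right = np.roll(u, -1) / (2 * dx)    # message from the right neighbor
msg_left = -np.roll(u, 1) / (2 * dx)     # message from the left neighbor
du = msg_right + msg_left                # aggregation step

print(np.max(np.abs(du - np.cos(x))))    # O(dx^2) discretization error
```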
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Numerical Approximation in CFD Problems Using Physics Informed Machine Learning [0.0]
The thesis focuses on various techniques to find an alternate approximation method that could be universally used for a wide range of CFD problems.
The focus stays over physics informed machine learning techniques where solving differential equations is possible without any training with computed data.
The extreme learning machine (ELM) is a very fast neural network algorithm, achieved at the cost of giving up tunable hidden-layer parameters.
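A minimal ELM sketch shows where the speed comes from: the hidden layer is random and frozen, so training reduces to one linear least-squares solve (an illustration, not the thesis code):

```python
import numpy as np

def elm_fit(X, y, n_hidden=200, rng=None):
    """Extreme learning machine: the hidden layer is random and frozen,
    so 'training' is a single linear least-squares solve."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only output weights are fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a 1-D function with no iterative training at all
X = np.linspace(-3, 3, 400)[:, None]
y = np.sin(X[:, 0])
W, b, beta = elm_fit(X, y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
print(err)   # small approximation error
```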
arXiv Detail & Related papers (2021-11-01T22:54:51Z) - Feature Engineering with Regularity Structures [4.082216579462797]
We investigate the use of models from the theory of regularity structures as features in machine learning tasks.
We provide a flexible definition of a model feature vector associated to a space-time signal, along with two algorithms which illustrate ways in which these features can be combined with linear regression.
We apply these algorithms in several numerical experiments designed to learn solutions to PDEs with a given forcing and boundary data.
arXiv Detail & Related papers (2021-08-12T17:53:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.