RoeNets: Predicting Discontinuity of Hyperbolic Systems from Continuous
Data
- URL: http://arxiv.org/abs/2006.04180v1
- Date: Sun, 7 Jun 2020 15:28:00 GMT
- Authors: Shiying Xiong, Xingzhe He, Yunjin Tong, Runze Liu, and Bo Zhu
- Abstract summary: We introduce Roe Neural Networks (RoeNets) that can predict the discontinuity of the hyperbolic conservation laws (HCLs) based on short-term discontinuous and even continuous training data.
- Score: 17.38092910172857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Roe Neural Networks (RoeNets) that can predict the discontinuity
of the hyperbolic conservation laws (HCLs) based on short-term discontinuous
and even continuous training data. Our methodology is inspired by Roe
approximate Riemann solver (P. L. Roe, J. Comput. Phys., vol. 43, 1981, pp.
357--372), which is one of the most fundamental HCLs numerical solvers. In
order to solve the HCLs accurately, Roe argues that one must construct a Roe
matrix satisfying "Property U": it is diagonalizable with real eigenvalues,
consistent with the exact Jacobian, and preserves conserved quantities.
However, no general numerical method can construct such a matrix. Our model
achieves a breakthrough in solving the HCLs by applying the Roe solver from a
neural network perspective. To enhance
the expressiveness of our model, we incorporate pseudoinverses in a novel
context to enable a hidden dimension, giving us flexibility in the number of
parameters. Predicting long-term discontinuity from a short window of
continuous training data is generally considered impossible with traditional
machine learning approaches. We demonstrate that our model generates highly
accurate predictions of the evolution of convection without dissipation and of
the discontinuity of hyperbolic systems from smooth training data.
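For a scalar conservation law, the classical Roe construction the abstract builds on can be sketched in a few lines. The following is a minimal illustration for the 1D inviscid Burgers equation, not the paper's neural method: the Roe-averaged speed plays the role of the Roe matrix, and Property U reduces to the requirement that the averaged speed exactly relates the flux jump to the state jump.

```python
import numpy as np

def roe_flux_burgers(u_left, u_right):
    """Roe numerical flux for the 1D inviscid Burgers equation, f(u) = u^2/2.

    The Roe average a = (u_left + u_right)/2 satisfies
    f(u_right) - f(u_left) = a * (u_right - u_left), the scalar analogue of
    Roe's "Property U" (consistency with the exact Jacobian f'(u) = u).
    """
    a = 0.5 * (u_left + u_right)           # Roe-averaged wave speed
    f_l = 0.5 * u_left ** 2
    f_r = 0.5 * u_right ** 2
    return 0.5 * (f_l + f_r) - 0.5 * np.abs(a) * (u_right - u_left)

def step(u, dx, dt):
    """One conservative finite-volume update with periodic boundaries."""
    flux = roe_flux_burgers(u, np.roll(u, -1))   # flux at interface i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

# Smooth initial data that steepens toward a shock -- the kind of continuous
# data from which RoeNets are trained to predict the later discontinuity.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)
dt = 0.4 * dx / np.max(np.abs(u))        # CFL-limited time step
for _ in range(100):
    u = step(u, dx, dt)
```

Because the update is in conservation form with periodic boundaries, the cell average of u is preserved to machine precision, which is the discrete analogue of the conserved-quantity condition in Property U.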
Related papers
- A TVD neural network closure and application to turbulent combustion [1.374949083138427]
Trained neural networks (NN) have attractive features for closing governing equations, but they can stray from physical reality.
A NN formulation is introduced to preclude spurious oscillations that violate solution boundedness or positivity.
It is embedded in the discretized equations as a machine learning closure and strictly constrained.
arXiv Detail & Related papers (2024-08-06T19:22:13Z) - Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z) - Explainable Parallel RCNN with Novel Feature Representation for Time
Series Forecasting [0.0]
Time series forecasting is a fundamental challenge in data science.
We develop a parallel deep learning framework composed of RNN and CNN.
Extensive experiments on three datasets reveal the effectiveness of our method.
arXiv Detail & Related papers (2023-05-08T17:20:13Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - Infinite-Fidelity Coregionalization for Physical Simulation [22.524773932668023]
Multi-fidelity modeling and learning are important in physical simulation-related applications.
We propose Infinite Fidelity Coregionalization (IFC) to exploit rich information within continuous, infinite fidelities.
We show the advantage of our method in several benchmark tasks in computational physics.
arXiv Detail & Related papers (2022-07-01T23:01:10Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2d non-linear conservation law and a 2d shallow water model, and compare the results with a purely data-driven method in which the dynamics is evolved in time with a long short-term memory network.
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of regression problem, where the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.