Unsupervised Learning Method for the Wave Equation Based on Finite
Difference Residual Constraints Loss
- URL: http://arxiv.org/abs/2401.12489v1
- Date: Tue, 23 Jan 2024 05:06:29 GMT
- Title: Unsupervised Learning Method for the Wave Equation Based on Finite
Difference Residual Constraints Loss
- Authors: Xin Feng, Yi Jiang, Jia-Xian Qin, Lai-Ping Zhang, Xiao-Gang Deng
- Abstract summary: This paper proposes an unsupervised learning method for the wave equation based on finite difference residual constraints.
We construct a novel finite difference residual constraint based on structured grids and finite difference methods, as well as an unsupervised training strategy.
- Score: 8.251460531915997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wave equation is an important physical partial differential equation, and
in recent years, deep learning has shown promise in accelerating or replacing
traditional numerical methods for solving it. However, existing deep learning
methods suffer from high data acquisition costs, low training efficiency, and
insufficient generalization capability for boundary conditions. To address
these issues, this paper proposes an unsupervised learning method for the wave
equation based on finite difference residual constraints. We construct a novel
finite difference residual constraint based on structured grids and finite
difference methods, as well as an unsupervised training strategy, enabling
convolutional neural networks to be trained without data and to predict the
forward propagation of waves. Experimental results show that finite difference
residual constraints offer advantages over PINN-style physics-informed
constraints, such as easier fitting, lower computational cost, and stronger
generalization across source terms, making our method more efficient to train
and more potent in application.
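Below is a minimal PyTorch sketch of what a finite difference residual constraint loss for the 2D wave equation u_tt = c^2 (u_xx + u_yy) could look like, assuming a second-order central-difference stencil on a structured grid. The function names, the CNN input convention, and the stencil choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def fd_residual_loss(u_prev, u_curr, u_next, c, dt, dx):
    """u_prev, u_curr, u_next: (B, 1, H, W) wave fields at t-dt, t, t+dt."""
    # Five-point Laplacian stencil applied on the structured grid.
    lap_kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3).to(u_curr) / dx ** 2
    lap = F.conv2d(u_curr, lap_kernel, padding=1)
    # Central difference in time minus c^2 times the spatial Laplacian.
    residual = (u_next - 2 * u_curr + u_prev) / dt ** 2 - c ** 2 * lap
    return residual.pow(2).mean()

def train_step(model, optimizer, u_prev, u_curr, c, dt, dx):
    # Unsupervised step: the CNN predicts the next wave field and is
    # penalized only by the discretized physics, with no labeled data.
    optimizer.zero_grad()
    u_next = model(torch.cat([u_prev, u_curr], dim=1))
    loss = fd_residual_loss(u_prev, u_curr, u_next, c, dt, dx)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the residual is evaluated with fixed stencils rather than automatic differentiation through the network inputs, each loss evaluation is a handful of convolutions, which is consistent with the lower training cost the abstract claims relative to PINN-style constraints.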
Related papers
- Physics Informed Kolmogorov-Arnold Neural Networks for Dynamical Analysis via Efficient-KAN and WAV-KAN [0.12045539806824918]
We implement the Physics-Informed Kolmogorov-Arnold Neural Networks (PIKAN) through efficient-KAN and WAV-KAN.
PIKAN demonstrates superior performance compared to conventional deep neural networks, achieving the same level of accuracy with fewer layers and reduced computational overhead.
arXiv Detail & Related papers (2024-07-25T20:14:58Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Enriched Physics-informed Neural Networks for Dynamic Poisson-Nernst-Planck Systems [0.8192907805418583]
This paper proposes a meshless deep learning algorithm, enriched physics-informed neural networks (EPINNs), to solve dynamic Poisson-Nernst-Planck (PNP) equations.
EPINNs take traditional physics-informed neural networks as the foundational framework and add adaptive loss weights to balance the loss terms.
Numerical results indicate that the new method has better applicability than traditional numerical methods in solving such coupled nonlinear systems.
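As a hedged illustration of adaptive loss weighting, the sketch below balances PDE and boundary loss terms with learnable log-variance weights, one common scheme; the EPINN paper's exact weighting rule may differ, and all names here are hypothetical.

```python
import torch

# Learnable log-variances, one per loss term (uncertainty-style weighting;
# the EPINN paper's exact rule may differ).
log_sigma_pde = torch.zeros((), requires_grad=True)
log_sigma_bc = torch.zeros((), requires_grad=True)

def weighted_loss(loss_pde, loss_bc):
    # Each term is scaled by a learned precision exp(-log_sigma); the bare
    # log_sigma terms keep the weights from collapsing to zero.
    return (torch.exp(-log_sigma_pde) * loss_pde + log_sigma_pde
            + torch.exp(-log_sigma_bc) * loss_bc + log_sigma_bc)
```

The log-sigmas would be optimized jointly with the network parameters, e.g. passed to the same Adam optimizer alongside the model weights.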
arXiv Detail & Related papers (2024-02-01T02:57:07Z)
- An Unsupervised Deep Learning Approach for the Wave Equation Inverse Problem [12.676629870617337]
Full-waveform inversion (FWI) is a powerful geophysical imaging technique that infers high-resolution subsurface physical parameters.
Due to observational limitations, such as a limited number of shots or receivers, and random noise, conventional inversion methods face numerous challenges.
We provide an unsupervised learning approach aimed at accurately reconstructing physical velocity parameters.
arXiv Detail & Related papers (2023-11-08T08:39:33Z)
- Efficient Neural PDE-Solvers using Quantization Aware Training [71.0934372968972]
We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and across three orders of magnitude in FLOPs.
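A minimal sketch of quantization-aware training with PyTorch's eager-mode torch.ao.quantization API, assuming a toy convolutional solver; this is a generic QAT recipe, not necessarily the configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class TinySolver(nn.Module):
    # A toy stand-in for a neural PDE solver.
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # fake-quantizes the input
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
        self.dequant = tq.DeQuantStub()  # returns a float output

    def forward(self, x):
        return self.dequant(self.net(self.quant(x)))

model = TinySolver().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)
# ... run the usual training loop: fake-quant ops simulate int8 rounding,
# so the weights learn to tolerate quantization error ...
model.eval()
model_int8 = tq.convert(model)  # real int8 kernels for inference
```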
arXiv Detail & Related papers (2023-08-14T09:21:19Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs and improve the stability of the training process.
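A minimal sketch of an implicit gradient step, which solves theta_new = theta_old - lr * grad L(theta_new) by fixed-point iteration; this illustrates the general idea of implicit updates, while the paper's ISGD variant for PINNs may differ in its inner solver and step-size rule.

```python
import torch

def implicit_sgd_step(params, loss_fn, lr=1e-2, inner_iters=5):
    # Solve theta_new = theta_old - lr * grad L(theta_new) by fixed-point
    # iteration: repeatedly re-evaluate the gradient at the current iterate.
    theta_old = [p.detach().clone() for p in params]
    for _ in range(inner_iters):
        grads = torch.autograd.grad(loss_fn(), params)
        with torch.no_grad():
            for p, p0, g in zip(params, theta_old, grads):
                p.copy_(p0 - lr * g)
```

For a PINN, loss_fn would recompute the physics-informed loss with the current parameter values on each inner iteration.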
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
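As a simplified illustration of the spatial side of such a decomposition, the sketch below splits a fine grid into staggered coarse subgrids that could be processed in parallel and interleaved back; this is an assumption-laden toy view, not NeuralStagger's actual pipeline.

```python
import torch

def stagger(u):
    # u: (B, C, H, W) fine-resolution field; returns 4 staggered coarse
    # fields of shape (B, C, H//2, W//2) that can be handled in parallel.
    return [u[..., i::2, j::2] for i in range(2) for j in range(2)]

def unstagger(subs, like):
    # Interleave the coarse solutions back onto the fine grid.
    out = torch.empty_like(like)
    for k, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        out[..., i::2, j::2] = subs[k]
    return out
```

Round-tripping unstagger(stagger(u), u) reproduces u exactly, which illustrates why this kind of decomposition loses no information.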
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Finite Volume Neural Network: Modeling Subsurface Contaminant Transport [0.880802134366532]
We introduce a new approach called the Finite Volume Neural Network (FINN).
FINN adopts the numerical structure of the well-known Finite Volume Method for handling partial differential equations.
FINN exhibits excellent generalization ability when applied to both synthetic and real, sparse experimental data.
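A toy 1D sketch of the finite-volume idea with a learned numerical flux at cell faces, echoing how FINN keeps the conservative FV update structure; the flux network and boundary treatment here are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical learned numerical flux: maps the (left, right) cell states
# at each interior face to a scalar flux value.
flux_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

def fv_step(u, dt, dx):
    # u: (N,) cell averages. One explicit finite-volume update.
    left, right = u[:-1], u[1:]
    f = flux_net(torch.stack([left, right], dim=-1)).squeeze(-1)
    # Zero flux through the two domain boundaries (a closed domain).
    f = torch.cat([u.new_zeros(1), f, u.new_zeros(1)])
    # Conservative update: inflow through the left face minus outflow
    # through the right face of every cell.
    return u + dt / dx * (f[:-1] - f[1:])
```

Because every face flux is added to one cell and subtracted from its neighbor, mass is conserved by construction, which is the structural prior the FV formulation contributes.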
arXiv Detail & Related papers (2021-04-13T08:23:44Z)
- On Theory-training Neural Networks to Infer the Solution of Highly Coupled Differential Equations [0.0]
We present insights into theory-training networks for learning the solution of highly coupled differential equations.
We introduce a theory-training technique that, by leveraging regularization, eliminates oscillations in the training loss, decreases the final training loss, and improves the accuracy of the inferred solution.
arXiv Detail & Related papers (2021-02-09T15:45:08Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.