Least-Squares-Embedded Optimization for Accelerated Convergence of PINNs in Acoustic Wavefield Simulations
- URL: http://arxiv.org/abs/2504.16553v1
- Date: Wed, 23 Apr 2025 09:32:14 GMT
- Title: Least-Squares-Embedded Optimization for Accelerated Convergence of PINNs in Acoustic Wavefield Simulations
- Authors: Mohammad Mahdi Abedi, David Pardo, Tariq Alkhalifah
- Abstract summary: PINNs have shown promise in solving partial differential equations. For scattered acoustic wavefield simulation based on the Helmholtz equation, we derive a hybrid optimization framework. This framework accelerates training convergence by embedding a least-squares (LS) solver directly into the GD loss function.
- Score: 2.8948274245812327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-Informed Neural Networks (PINNs) have shown promise in solving partial differential equations (PDEs), including the frequency-domain Helmholtz equation. However, standard training of PINNs using gradient descent (GD) suffers from slow convergence and instability, particularly for high-frequency wavefields. For scattered acoustic wavefield simulation based on the Helmholtz equation, we derive a hybrid optimization framework that accelerates training convergence by embedding a least-squares (LS) solver directly into the GD loss function. This formulation enables optimal updates for the linear output layer. Our method is applicable with or without perfectly matched layers (PML), and we provide practical tensor-based implementations for both scenarios. Numerical experiments on benchmark velocity models demonstrate that our approach achieves faster convergence, higher accuracy, and improved stability compared to conventional PINN training. In particular, our results show that the LS-enhanced method converges rapidly even in cases where standard GD-based training fails. The LS solver operates on a small normal matrix, ensuring minimal computational overhead and making the method scalable for large-scale wavefield simulations.
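The core mechanism, solving exactly for the network's linear output layer via a small normal system while gradient descent updates the nonlinear layers, can be sketched in a few lines of NumPy (the function and variable names below are illustrative, not taken from the paper's code):

```python
import numpy as np

def ls_output_layer(H, b, reg=1e-8):
    """Solve for linear output-layer weights w minimizing ||H w - b||^2.

    H   : (N, m) matrix of last-hidden-layer features at N collocation
          points (in the paper's setting, features with the PDE operator
          already applied); m is small, so the normal matrix is cheap.
    b   : (N,) right-hand side, e.g. the scattered-field source term.
    reg : small Tikhonov term to keep the normal matrix well conditioned.
    """
    m = H.shape[1]
    A = H.T @ H + reg * np.eye(m)      # small m x m normal matrix
    return np.linalg.solve(A, H.T @ b)

# toy usage: recover known output weights from the features exactly
rng = np.random.default_rng(0)
H = rng.standard_normal((500, 8))
w_true = rng.standard_normal(8)
b = H @ w_true
w = ls_output_layer(H, b)
```

Because the normal matrix is only m x m, where m is the width of the last hidden layer, the extra solve adds little overhead per GD iteration, consistent with the scalability claim above.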
Related papers
- Gabor-Enhanced Physics-Informed Neural Networks for Fast Simulations of Acoustic Wavefields [2.8948274245812327]
Physics-Informed Neural Networks (PINNs) have gained increasing attention for solving partial differential equations. We propose a simplified PINN framework that incorporates Gabor functions. We demonstrate its superior accuracy, faster convergence, and greater robustness compared to both traditional PINNs and earlier Gabor-based PINNs.
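For context on the entry above: a Gabor function is a Gaussian-windowed sinusoid, which makes it a natural building block for oscillatory wavefields. A minimal sketch (parameter values are illustrative, not from the paper):

```python
import numpy as np

def gabor(x, mu=0.0, sigma=1.0, omega=5.0, phi=0.0):
    """1-D Gabor function: a Gaussian envelope modulating a sinusoid."""
    envelope = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return envelope * np.cos(omega * x + phi)

# the atom peaks at 1 at the envelope center when omega*mu + phi = 0
x = np.linspace(-3.0, 3.0, 601)
g = gabor(x)
```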
arXiv Detail & Related papers (2025-02-24T13:25:40Z) - Training Neural ODEs Using Fully Discretized Simultaneous Optimization [2.290491821371513]
Training Neural Ordinary Differential Equations (Neural ODEs) requires solving differential equations at each epoch, leading to high computational costs.
In particular, we employ a collocation-based, fully discretized formulation and use IPOPT, a solver for large-scale nonlinear optimization.
Our results show significant potential for (collocation-based) simultaneous Neural ODE training pipelines.
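The "fully discretized, simultaneous" idea can be illustrated on a toy problem (an assumption-level sketch, not the paper's IPOPT formulation): instead of time-stepping the ODE dy/dt = -y, impose trapezoidal collocation equations at every grid interval and solve them all at once. For this linear ODE the simultaneous system is linear, so a single linear solve plays the role of the NLP solver:

```python
import numpy as np

N, T = 50, 1.0
h = T / N
t = np.linspace(0.0, T, N + 1)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], b[0] = 1.0, 1.0            # initial condition y(0) = 1
for i in range(N):
    # trapezoidal collocation: y_{i+1} - y_i = -h/2 * (y_i + y_{i+1})
    A[i + 1, i] = -(1.0 - 0.5 * h)
    A[i + 1, i + 1] = 1.0 + 0.5 * h

y_sol = np.linalg.solve(A, b)       # all time points solved simultaneously
err = np.max(np.abs(y_sol - np.exp(-t)))
```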
arXiv Detail & Related papers (2025-02-21T18:10:26Z) - Multi-frequency wavefield solutions for variable velocity models using meta-learning enhanced low-rank physics-informed neural network [3.069335774032178]
Physics-informed neural networks (PINNs) face significant challenges in modeling multi-frequency wavefields in complex velocity models.
We propose Meta-LRPINN, a novel framework that combines low-rank parameterization with meta-learning and frequency embedding.
Numerical experiments show that Meta-LRPINN achieves much faster convergence and much higher accuracy than baseline methods.
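The low-rank parameterization named in the entry above can be sketched as follows (a generic illustration under stated assumptions, not the paper's architecture): replace a dense weight matrix W with a rank-r factorization W = U V, shrinking the parameter count from n_out * n_in to r * (n_out + n_in):

```python
import numpy as np

n_out, n_in, r = 256, 256, 8
rng = np.random.default_rng(0)
U = rng.standard_normal((n_out, r))
V = rng.standard_normal((r, n_in))
W = U @ V                               # effective low-rank weight matrix

dense_params = n_out * n_in             # parameters of a dense layer
lowrank_params = r * (n_out + n_in)     # parameters of the factorization
```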
arXiv Detail & Related papers (2025-02-02T20:12:39Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the advantages of NWoS in accuracy, speed, and computational cost.
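For context, the classical walk-on-spheres estimator that NWoS builds on can be sketched for Laplace's equation on the unit disk (a standard textbook construction, not the paper's neural solver): jump to a uniform point on the largest inscribed circle until the walker is within eps of the boundary, then read off the boundary data.

```python
import numpy as np

def wos_estimate(p0, g, n_walks=5000, eps=1e-3, seed=0):
    """Monte Carlo estimate of the harmonic function with boundary data g
    at point p0 inside the unit disk, via walk-on-spheres."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        p = np.array(p0, dtype=float)
        while True:
            d = 1.0 - np.linalg.norm(p)      # distance to the unit circle
            if d < eps:
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            p = p + d * np.array([np.cos(theta), np.sin(theta)])
        total += g(p / np.linalg.norm(p))    # nearest boundary point
    return total / n_walks

# g(x, y) = x is harmonic, so the estimate at (0.3, 0.2) should be near 0.3
u_hat = wos_estimate((0.3, 0.2), lambda q: q[0])
```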
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
In this paper, we examine the use of convex neural recovery models for characterizing the training landscape of Deep Neural Network (DNN) models.
We show that all stationary points of the nonconvex objective can be characterized as the global optima of a subsampled convex program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Enhancing Low-Order Discontinuous Galerkin Methods with Neural Ordinary Differential Equations for Compressible Navier--Stokes Equations [0.1578515540930834]
We introduce an end-to-end differentiable framework for solving the compressible Navier-Stokes equations. This integrated approach combines a differentiable discontinuous Galerkin solver with a neural network source term. We demonstrate the performance of the proposed framework through two examples.
arXiv Detail & Related papers (2023-10-29T04:26:23Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been effectively demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
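A toy illustration of why implicit updates improve stability (an assumption-level sketch, not the paper's ISGD algorithm): for the quadratic f(x) = 0.5 * lam * x^2, the explicit step diverges when eta * lam > 2, while the implicit step, which solves x_new = x - eta * f'(x_new), contracts for any step size:

```python
# explicit GD: x <- x - eta * lam * x        (factor 1 - eta*lam)
# implicit GD: x <- x / (1 + eta * lam)      (factor 1 / (1 + eta*lam))
eta, lam = 1.0, 3.0          # eta * lam = 3 > 2, so explicit GD diverges
x_explicit = x_implicit = 1.0
for _ in range(20):
    x_explicit = x_explicit - eta * lam * x_explicit   # multiplies by -2
    x_implicit = x_implicit / (1.0 + eta * lam)        # multiplies by 1/4
```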
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Wave simulation in non-smooth media by PINN with quadratic neural
network and PML condition [2.7651063843287718]
The recently proposed physics-informed neural network (PINN) has achieved successful applications in solving a wide range of partial differential equations (PDEs).
In this paper, we solve the acoustic and visco-acoustic scattered-field wave equation in the frequency domain with PINN, instead of the full wave equation, to remove the source perturbation.
We show that PML and quadratic neurons, as well as attenuation, improve the results, and we discuss the reasons for this improvement.
arXiv Detail & Related papers (2022-08-16T13:29:01Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
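The reduced-order pipeline above hinges on a POD basis; a minimal NumPy sketch of how such a basis is typically extracted from solution snapshots via the SVD (variable names are illustrative, not from the paper):

```python
import numpy as np

def pod_basis(snapshots, k):
    """Rank-k Proper Orthogonal Decomposition (POD) basis.

    snapshots : (n, s) matrix whose columns are s solution snapshots on an
                n-point grid.
    Returns the (n, k) orthonormal basis of the leading k POD modes.
    """
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k]

# toy usage: snapshots drawn from a 3-dimensional subspace are reproduced
# exactly by projecting onto 3 POD modes
rng = np.random.default_rng(1)
modes = np.linalg.qr(rng.standard_normal((100, 3)))[0]
snaps = modes @ rng.standard_normal((3, 20))
basis = pod_basis(snaps, 3)
recon = basis @ (basis.T @ snaps)       # reduced-order reconstruction
```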
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions [47.276004075767176]
We develop software for convex optimization of two-layer neural networks with ReLU activation functions. We show that convex gated ReLU models yield data-dependent approximations to the ReLU training problem.
arXiv Detail & Related papers (2022-02-02T23:50:53Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale problems with a deep neural network.
Our method requires a much smaller number of communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
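The quantity being maximized in the entry above, AUC, is the probability that a randomly drawn positive example is scored above a randomly drawn negative one (ties counting half). A direct pairwise computation:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Pairwise AUC: fraction of (positive, negative) pairs ranked
    correctly, with ties counted as 0.5."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean((sp > sn) + 0.5 * (sp == sn)))

# a perfect ranking gives AUC 1.0
a = auc([0.9, 0.8, 0.7], [0.4, 0.1])
```

This O(n_pos * n_neg) form is the objective that AUC-maximization methods optimize surrogates of; it is fine for illustration but not for large-scale training.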
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.