A Neural-Operator Preconditioned Newton Method for Accelerated Nonlinear Solvers
- URL: http://arxiv.org/abs/2511.08811v1
- Date: Thu, 13 Nov 2025 01:09:39 GMT
- Title: A Neural-Operator Preconditioned Newton Method for Accelerated Nonlinear Solvers
- Authors: Youngkyu Lee, Shanqing Liu, Jerome Darbon, George Em Karniadakis
- Abstract summary: We introduce a fixed-point neural operator (FPNO) that learns the direct mapping from the current iterate to the solution by emulating fixed-point iterations. Unlike traditional line-search or trust-region algorithms, the proposed FPNO adaptively employs negative step sizes to effectively mitigate the effects of unbalanced nonlinearities.
- Score: 4.9270397649918
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a novel neural preconditioned Newton (NP-Newton) method for solving parametric nonlinear systems of equations. To overcome the stagnation or instability of Newton iterations caused by unbalanced nonlinearities, we introduce a fixed-point neural operator (FPNO) that learns the direct mapping from the current iterate to the solution by emulating fixed-point iterations. Unlike traditional line-search or trust-region algorithms, the proposed FPNO adaptively employs negative step sizes to effectively mitigate the effects of unbalanced nonlinearities. Through numerical experiments we demonstrate the computational efficiency and robustness of the proposed NP-Newton method across multiple real-world applications, especially for very strong nonlinearities.
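The abstract pairs a classical Newton iteration with a learned operator that shapes the update. As an illustration only, the sketch below shows where such an operator could plug into a Newton loop; the function `fpno` and its role as a step-size predictor (possibly negative, as the abstract notes) are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def np_newton_sketch(F, J, fpno, x0, tol=1e-10, max_iter=50):
    """Illustrative neural-preconditioned Newton loop.

    `fpno` is a stand-in for a learned operator: given the current
    iterate and the Newton direction, it predicts a step size alpha,
    which (unlike classical line search) may be negative. The names
    and the operator's exact role here are assumptions.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(J(x), -r)  # classical Newton direction
        alpha = fpno(x, dx)             # learned step size, may be < 0
        x = x + alpha * dx
    return x

# Toy 1-D example: F(x) = x^3 - 2, with a trivial "operator" that
# always returns a full step (alpha = 1), reducing to plain Newton.
F = lambda x: np.array([x[0]**3 - 2.0])
J = lambda x: np.array([[3.0 * x[0]**2]])
root = np_newton_sketch(F, J, lambda x, dx: 1.0, np.array([1.0]))
```

With a trained operator in place of the constant step, the same loop structure applies; only the step-size prediction changes.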
Related papers
- Gradient Descent as a Perceptron Algorithm: Understanding Dynamics and Implicit Acceleration [67.12978375116599]
We show that the steps of gradient descent (GD) reduce to those of generalized perceptron algorithms. This helps explain the optimization dynamics and the implicit acceleration phenomenon observed in neural networks.
arXiv Detail & Related papers (2025-12-12T14:16:35Z) - Operator Learning at Machine Precision [36.02387239941959]
We introduce CHONKNORIS (Cholesky Newton--Kantorovich Neural Operator Residual Iterative System), an operator learning paradigm that can achieve machine precision. CHONKNORIS draws on numerical analysis: many nonlinear forward and inverse PDE problems are solvable by Newton-type methods. Our model is able to accurately solve unseen nonlinear PDEs such as the Klein--Gordon and sine-Gordon equations.
arXiv Detail & Related papers (2025-11-25T06:49:25Z) - Neural-Initialized Newton: Accelerating Nonlinear Finite Elements via Operator Learning [0.0]
We propose a Newton-based scheme to accelerate the parametric solution of nonlinear problems in computational solid mechanics. A physics-informed conditional neural field is trained to approximate the nonlinear parametric solution of the governing equations.
arXiv Detail & Related papers (2025-11-10T07:45:10Z) - A fast neural hybrid Newton solver adapted to implicit methods for nonlinear dynamics [6.642649934130245]
We propose a novel deep-learning-based hybrid Newton's method to accelerate the solution of the nonlinear systems arising at each implicit time step of stiff nonlinear evolution equations. A quantifiable rate of improvement in Newton's method is provided, and we analyse the upper bound of the generalisation error of our unsupervised learning strategy.
arXiv Detail & Related papers (2024-07-04T14:02:10Z) - On Newton's Method to Unlearn Neural Networks [44.85793893441989]
We seek approximate unlearning algorithms for neural networks (NNs) that return identical models to the retrained oracle.
We propose CureNewton's method, a principled approach that leverages cubic regularization to handle Hessian degeneracy effectively.
Experiments across different models and datasets show that our method achieves unlearning performance competitive with the state-of-the-art algorithm in practical unlearning settings.
arXiv Detail & Related papers (2024-06-20T17:12:20Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - A Structure-Guided Gauss-Newton Method for Shallow ReLU Neural Network [18.06366638807982]
We propose a structure-guided Gauss-Newton (SgGN) method for solving least squares problems using a shallow ReLU neural network.<n>The method effectively takes advantage of both the least squares structure and the neural network structure of the objective function.
arXiv Detail & Related papers (2024-04-07T20:24:44Z) - Enriched Physics-informed Neural Networks for Dynamic Poisson-Nernst-Planck Systems [0.8192907805418583]
This paper proposes a meshless deep learning algorithm, enriched physics-informed neural networks (EPINNs), to solve dynamic Poisson-Nernst-Planck (PNP) equations.
EPINNs take traditional physics-informed neural networks as the base framework and add adaptive loss weights to balance the loss functions.
Numerical results indicate that the new method has better applicability than traditional numerical methods in solving such coupled nonlinear systems.
arXiv Detail & Related papers (2024-02-01T02:57:07Z) - Improving Pseudo-Time Stepping Convergence for CFD Simulations With Neural Networks [44.99833362998488]
The Navier-Stokes equations may exhibit highly nonlinear behavior.
The system of nonlinear equations resulting from the discretization of the Navier-Stokes equations can be solved using nonlinear iteration methods, such as Newton's method.
In this paper, pseudo-transient continuation is employed in order to improve nonlinear convergence.
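Pseudo-transient continuation, mentioned here, augments Newton's method with an artificial time step that grows as the residual shrinks, trading early robustness for late quadratic convergence. A minimal sketch follows, with illustrative parameter names and the standard switched-evolution-relaxation (SER) update for the pseudo time step; it is not the paper's implementation.

```python
import numpy as np

def ptc_newton(F, J, x0, dt0=1e-2, tol=1e-10, max_iter=200):
    """Minimal pseudo-transient continuation (PTC) sketch.

    Instead of solving J dx = -F directly, each step solves
    (I/dt + J) dx = -F. The pseudo time step dt grows as the
    residual shrinks (SER update), so the iteration transitions
    from a damped, Euler-like phase to full Newton steps.
    """
    x = x0.astype(float)
    dt = dt0
    r = F(x)
    for _ in range(max_iter):
        rn = np.linalg.norm(r)
        if rn < tol:
            break
        A = np.eye(len(x)) / dt + J(x)   # shifted Jacobian
        x = x + np.linalg.solve(A, -r)
        r_new = F(x)
        # SER: scale dt by the residual reduction ratio
        dt *= rn / max(np.linalg.norm(r_new), 1e-300)
        r = r_new
    return x

# Toy system: F(x) = [x0^2 + x1 - 3, x0 + x1^2 - 5], root at (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
sol = ptc_newton(F, J, np.array([1.0, 1.0]))
```

As dt grows large, the shifted system I/dt + J approaches the plain Jacobian, recovering the quadratic convergence of Newton's method near the solution.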
arXiv Detail & Related papers (2023-10-10T15:45:19Z) - Stochastic Nonlinear Control via Finite-dimensional Spectral Dynamic Embedding [20.43835169613882]
This paper proposes Spectral Dynamics Embedding Control (SDEC), an approach to optimal control for nonlinear systems. It reveals an infinite-dimensional feature representation induced by the system's nonlinear dynamics, enabling a linear representation of the state-action value function. For practical implementation, this representation is approximated using finite-dimensional truncations.
arXiv Detail & Related papers (2023-04-08T04:23:46Z) - A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z) - Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage-Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.