Efficient PINNs via Multi-Head Unimodular Regularization of the Solutions Space
- URL: http://arxiv.org/abs/2501.12116v2
- Date: Wed, 27 Aug 2025 09:33:34 GMT
- Title: Efficient PINNs via Multi-Head Unimodular Regularization of the Solutions Space
- Authors: Pedro Tarancón-Álvarez, Pablo Tejerina-Pérez, Raul Jimenez, Pavlos Protopapas
- Abstract summary: We present a machine learning framework to facilitate the solution of nonlinear multiscale differential equations. This framework is based on what is called \textit{multi-head} (MH) training, which involves training the network to learn a general space of all solutions. We show that the multi-head approach, combined with Unimodular Regularization, significantly improves the efficiency of PINNs.
- Score: 0.6474760227870044
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-linear differential equations are a fundamental tool to describe different phenomena in nature. However, we still lack a well-established method to tackle stiff differential equations. Here we present a machine learning framework to facilitate the solution of nonlinear multiscale differential equations and, especially, inverse problems using Physics-Informed Neural Networks (PINNs). This framework is based on what is called \textit{multi-head} (MH) training, which involves training the network to learn a general space of all solutions for a given set of equations with certain variability, rather than learning a specific solution of the system. This setup is used with a second novel technique that we call Unimodular Regularization (UR) of the latent space of solutions. We show that the multi-head approach, combined with Unimodular Regularization, significantly improves the efficiency of PINNs by facilitating the transfer-learning process, thereby enabling solutions to be found for nonlinear, coupled, and multiscale differential equations.
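As a minimal, illustrative sketch (not the authors' implementation), the multi-head idea can be pictured as a shared network body that maps inputs into a latent space of solutions, with one lightweight head per equation instance reading off a specific solution. All names, layer sizes, and the untrained NumPy forward pass below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared body: learns a latent representation common to all equation instances.
W1 = rng.standard_normal((1, 32)); b1 = np.zeros(32)
W2 = rng.standard_normal((32, 32)); b2 = np.zeros(32)

# One lightweight linear head per equation-parameter instance.
n_heads = 4
heads = [rng.standard_normal((32, 1)) for _ in range(n_heads)]

def body(t):
    # t: (N, 1) collocation points -> (N, 32) latent features
    return np.tanh(np.tanh(t @ W1 + b1) @ W2 + b2)

def solution(t, head_idx):
    # Head k reads one specific solution u_k(t) out of the shared latent space.
    return body(t) @ heads[head_idx]

t = np.linspace(0.0, 1.0, 64).reshape(-1, 1)
outputs = [solution(t, k) for k in range(n_heads)]
print(len(outputs), outputs[0].shape)  # 4 heads, each returning a (64, 1) solution
```

Transfer learning then amounts to freezing the body and fitting only a new head for an unseen equation instance.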
Related papers
- Solving Inverse PDE Problems using Minimization Methods and AI [0.0]
In this work, we study both aspects for systems governed by differential equations, contrasting well-established numerical methods with new AI-based techniques, specifically Physics-Informed Neural Networks (PINNs). Our results suggest that PINNs can closely estimate solutions at competitive computational cost, and thus propose an effective tool for solving both direct and inverse problems for complex systems.
arXiv Detail & Related papers (2026-03-02T10:57:26Z) - A level-wise training scheme for learning neural multigrid smoothers with application to integral equations [0.5504341618778957]
Convolution-type integral equations commonly occur in signal processing and image processing. We propose a novel neural multigrid scheme where learned neural operators replace classical smoothers. Unlike classical smoothers, these operators are trained offline. Once trained, the neural smoothers generalize to new right-hand-side vectors without retraining, making the scheme an efficient solver.
arXiv Detail & Related papers (2026-03-01T11:45:46Z) - PTL-PINNs: Perturbation-Guided Transfer Learning with Physics-Informed Neural Networks for Nonlinear Systems [7.961515776672606]
We propose a perturbation-guided transfer learning framework for PINNs (PTL-PINN). PTL-PINNs solve an approximate linear perturbative system using closed-form expressions, enabling rapid generalization with the time complexity of matrix-vector multiplication. We show that PTL-PINNs achieve accuracy comparable to various Runge-Kutta methods, with computational speeds up to one order of magnitude faster.
arXiv Detail & Related papers (2026-01-17T16:09:33Z) - DInf-Grid: A Neural Differential Equation Solver with Differentiable Feature Grids [73.28614344779076]
We present a differentiable grid-based representation for efficiently solving differential equations (DEs). Our results demonstrate a 5-20x speed-up over coordinate-based methods, solving differential equations in seconds or minutes while maintaining comparable accuracy and compactness.
arXiv Detail & Related papers (2026-01-15T18:59:57Z) - Learning and discovering multiple solutions using physics-informed neural networks with random initialization and deep ensemble [10.047968926134363]
We explore the capability of physics-informed neural networks (PINNs) to discover multiple solutions.
PINNs can effectively uncover multiple solutions to nonlinear ordinary and partial differential equations.
We propose utilizing PINN-generated solutions as initial conditions or initial guesses for conventional numerical solvers.
arXiv Detail & Related papers (2025-03-08T19:32:22Z) - MultiSTOP: Solving Functional Equations with Reinforcement Learning [56.073581097785016]
We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.
This new methodology produces actual numerical solutions instead of bounds on them.
arXiv Detail & Related papers (2024-04-23T10:51:31Z) - A Block-Coordinate Approach of Multi-level Optimization with an
Application to Physics-Informed Neural Networks [0.0]
We propose a multi-level algorithm for the solution of nonlinear optimization problems and analyze its evaluation complexity.
We apply it to the solution of partial differential equations using physics-informed neural networks (PINNs) and show on a few test problems that the approach results in better solutions and significant computational savings.
arXiv Detail & Related papers (2023-05-23T19:12:02Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the supervision time points, and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Improved Training of Physics-Informed Neural Networks with Model
Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
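The claim that message passing solvers representationally contain classical methods can be made concrete: a finite-difference stencil is a special case of neighbour-message aggregation on a grid graph. A hypothetical NumPy sketch on a 1D periodic grid (all names illustrative, not the paper's code):

```python
import numpy as np

def fd_laplacian(u, h):
    # Classical 3-point stencil: (u[i-1] - 2 u[i] + u[i+1]) / h^2
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / h**2

def message_passing_laplacian(u, h):
    # Same operator phrased as message passing on a ring graph: each node
    # sums messages m(u_j, u_i) = (u_j - u_i) / h^2 from its two neighbours.
    msg_left = (np.roll(u, 1) - u) / h**2
    msg_right = (np.roll(u, -1) - u) / h**2
    return msg_left + msg_right

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
u = np.sin(x)

# The two formulations agree exactly; a learned message function generalizes this.
assert np.allclose(fd_laplacian(u, h), message_passing_laplacian(u, h))
print(np.max(np.abs(fd_laplacian(u, h) + u)))  # near 0, since (sin)'' = -sin
```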
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - One-Shot Transfer Learning of Physics-Informed Neural Networks [2.6084034060847894]
We present a framework for transfer learning PINNs that results in one-shot inference for linear systems of both ordinary and partial differential equations.
This means that highly accurate solutions to many unknown differential equations can be obtained instantaneously without retraining an entire network.
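The one-shot mechanism for linear equations can be sketched as follows: with the nonlinear body frozen, the equation residual is linear in the final-layer weights, so a new equation is solved by a single least-squares step instead of retraining. The sketch below uses frozen random tanh features as a stand-in for a pre-trained PINN body; all parameter choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen random features play the role of a pre-trained PINN body.
m = 80
W1 = rng.uniform(-2.0, 2.0, (1, m))
b1 = rng.uniform(-2.0, 2.0, m)

t = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
H = np.tanh(t @ W1 + b1)          # hidden features H(t)
Hp = (1.0 - H**2) * W1            # their exact time derivatives H'(t)

# New linear ODE u' + lam*u = 0, u(0) = 1: only the head weights w are unknown,
# so the residual H'w + lam*Hw = 0 plus the initial condition forms a linear
# least-squares problem -- no gradient-based retraining needed.
lam = 3.0
H0 = np.tanh(np.array([[0.0]]) @ W1 + b1)
A = np.vstack([Hp + lam * H, 50.0 * H0])          # weight the initial condition
rhs = np.concatenate([np.zeros(len(t)), [50.0]])
w, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = H @ w
err = np.max(np.abs(u - np.exp(-lam * t[:, 0])))
print(err)  # small: the new head reproduces u(t) = exp(-3t) in one shot
```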
arXiv Detail & Related papers (2021-10-21T17:14:58Z) - Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable
domain decomposition approach for solving differential equations [20.277873724720987]
We propose a new, scalable approach for solving large problems relating to differential equations called Finite Basis PINNs (FBPINNs).
FBPINNs are inspired by classical finite element methods, where the solution of the differential equation is expressed as the sum of a finite set of basis functions with compact support.
In FBPINNs neural networks are used to learn these basis functions, which are defined over small, overlapping subdomain problems.
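The compact-support blending can be illustrated with a hypothetical set of overlapping window functions normalized into a partition of unity; all concrete choices below (the cosine bump, subdomain centers, widths) are illustrative assumptions, not the FBPINN implementation:

```python
import numpy as np

def window(x, center, width):
    # Smooth bump with compact support on (center - width, center + width).
    s = np.clip((x - center) / width, -1.0, 1.0)
    return np.cos(0.5 * np.pi * s) ** 2

# Overlapping subdomains covering [0, 1]; spacing 0.25 with width 0.5,
# so each window overlaps its neighbours.
centers = np.linspace(0.0, 1.0, 5)
width = 0.5

x = np.linspace(0.0, 1.0, 101)
raw = np.stack([window(x, c, width) for c in centers])
weights = raw / raw.sum(axis=0)   # normalize into a partition of unity

# In an FBPINN, subdomain k's small network u_k(x) would be blended as
# u(x) = sum_k weights[k](x) * u_k(x); here we just verify the partition.
print(np.allclose(weights.sum(axis=0), 1.0))  # True
```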
arXiv Detail & Related papers (2021-07-16T13:03:47Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Learning optimal multigrid smoothers via neural networks [1.9336815376402723]
We propose an efficient framework for learning optimized smoothers from operator stencils in the form of convolutional neural networks (CNNs).
CNNs are trained on small-scale problems from a given type of PDEs based on a supervised loss function derived from multigrid convergence theories.
Numerical results on anisotropic rotated Laplacian problems demonstrate improved convergence rates and solution time compared with classical hand-crafted relaxation methods.
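For contrast, a classical hand-crafted relaxation method of the kind these learned smoothers are compared against is weighted Jacobi, which damps high-frequency error components quickly. A minimal sketch on the 1D Poisson problem (parameter choices are illustrative):

```python
import numpy as np

def weighted_jacobi(u, f, h, omega=2.0 / 3.0, sweeps=50):
    # Smoother for the 1D Poisson problem -u'' = f with zero boundary values.
    # Jacobi update u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2, relaxed by omega.
    for _ in range(sweeps):
        u_new = np.zeros_like(u)
        u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = (1.0 - omega) * u + omega * u_new
    return u

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.zeros(n)                    # -u'' = 0 has exact solution u = 0
u0 = np.sin(24 * np.pi * x)        # start from a high-frequency error mode
u = weighted_jacobi(u0.copy(), f, h)
print(np.max(np.abs(u)) < 1e-2 * np.max(np.abs(u0)))  # True: error damped fast
```

A learned CNN smoother replaces this fixed stencil update with trained convolution weights while keeping the same sweep structure.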
arXiv Detail & Related papers (2021-02-24T05:02:54Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal
Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
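For reference, the vanilla Cross-Entropy Method that GACEM modifies maintains a simple parametric sampling distribution and refits it to elite samples each iteration; the Gaussian model below is exactly the unimodal sampler that the masked autoregressive network replaces. All parameters are illustrative:

```python
import numpy as np

def cem_minimize(f, dim, iters=40, pop=200, elite_frac=0.1, seed=0):
    # Vanilla Cross-Entropy Method with a Gaussian sampling model.
    # (GACEM swaps this Gaussian for a masked autoregressive network so the
    # sampler can cover several disjoint solution regions at once.)
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        xs = rng.normal(mu, sigma, size=(pop, dim))
        elite = xs[np.argsort([f(x) for x in xs])[:n_elite]]
        # Refit the sampler to the elite set; the floor keeps sigma positive.
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Toy objective with a single minimum at (1, -2).
sol = cem_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2, dim=2)
print(np.round(sol, 2))  # close to [1, -2]
```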
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences of its use.