Supervised Metric Regularization Through Alternating Optimization for Multi-Regime Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2602.09980v1
- Date: Tue, 10 Feb 2026 17:06:57 GMT
- Title: Supervised Metric Regularization Through Alternating Optimization for Multi-Regime Physics-Informed Neural Networks
- Authors: Enzo Nicolas Spotorno, Josafat Ribeiro Leal, Antonio Augusto Frohlich
- Abstract summary: PINNs often face challenges when modeling dynamical systems with sharp regime transitions, such as bifurcations. We propose a Topology-Aware PINN (TAPINN) that aims to mitigate this challenge by structuring the latent space via Supervised Metric Regularization. Preliminary experiments on the Duffing Oscillator demonstrate that while standard baselines suffer from spectral bias and high-capacity Hypernetworks overfit, our approach achieves stable convergence with 2.18x lower gradient variance than a multi-output Sobolev Error baseline and 5x fewer parameters than a hypernetwork-based alternative.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard Physics-Informed Neural Networks (PINNs) often face challenges when modeling parameterized dynamical systems with sharp regime transitions, such as bifurcations. In these scenarios, the continuous mapping from parameters to solutions can result in spectral bias or "mode collapse", where the network averages distinct physical behaviors. We propose a Topology-Aware PINN (TAPINN) that aims to mitigate this challenge by structuring the latent space via Supervised Metric Regularization. Unlike standard parametric PINNs that map physical parameters directly to solutions, our method conditions the solver on a latent state optimized to reflect the metric-based separation between regimes, showing ~49% lower physics residual (0.082 vs. 0.160). We train this architecture using a phase-based Alternating Optimization (AO) schedule to manage gradient conflicts between the metric and physics objectives. Preliminary experiments on the Duffing Oscillator demonstrate that while standard baselines suffer from spectral bias and high-capacity Hypernetworks overfit (memorizing data while violating physics), our approach achieves stable convergence with 2.18x lower gradient variance than a multi-output Sobolev Error baseline, and 5x fewer parameters than a hypernetwork-based alternative.
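A minimal sketch of the phase-based Alternating Optimization (AO) schedule described in the abstract, using quadratic stand-in objectives. The targets, losses, phase length, and learning rate here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

# Toy AO schedule: latent parameters z are fit to a supervised metric
# target, while solver parameters w minimize a stand-in "physics"
# objective conditioned on z. Phases alternate so the two gradients
# never mix within a single update.
z = np.zeros(2)                        # latent state (metric objective)
w = np.zeros(2)                        # solver parameters (physics objective)
z_target = np.array([1.0, -1.0])       # hypothetical metric-separated regime anchors
lr, phase_len = 0.1, 10

def grad_metric(z):
    """Gradient of the metric loss ||z - z_target||^2 / 2."""
    return z - z_target

def grad_physics(w, z):
    """Gradient of a stand-in physics residual ||w - z||^2 / 2."""
    return w - z

for step in range(200):
    if (step // phase_len) % 2 == 0:   # metric phase: update latent only
        z -= lr * grad_metric(z)
    else:                              # physics phase: update solver only
        w -= lr * grad_physics(w, z)
```

Because each phase updates a disjoint parameter group against a single objective, the conflicting metric and physics gradients are never summed in one step, which is the intuition behind using AO to manage gradient conflicts.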
Related papers
- The Procrustean Bed of Time Series: The Optimization Bias of Point-wise Loss [53.542743390809356]
This paper aims to provide a first-principles analysis of the Expectation of Optimization Bias (EOB). Our analysis reveals a fundamental paradox: the more deterministic and structured the time series, the more severe the bias induced by a point-wise loss function. We present a concrete solution that simultaneously achieves both principles via the DFT or DWT.
arXiv Detail & Related papers (2025-12-21T06:08:22Z) - Towards a Unified Analysis of Neural Networks in Nonparametric Instrumental Variable Regression: Optimization and Generalization [66.08522228989634]
We establish the first global convergence result of neural networks for the two-stage least squares (2SLS) approach in nonparametric instrumental variable regression (NPIV). This is achieved by adopting a lifted perspective through mean-field Langevin dynamics (MFLD).
arXiv Detail & Related papers (2025-11-18T17:51:17Z) - Graph Neural Regularizers for PDE Inverse Problems [62.49743146797144]
We present a framework for solving a broad class of ill-posed inverse problems governed by partial differential equations (PDEs). The forward problem is numerically solved using the finite element method (FEM). We employ physics-inspired graph neural networks as learned regularizers, providing a robust, interpretable, and generalizable alternative to standard approaches.
arXiv Detail & Related papers (2025-10-23T21:43:25Z) - Hephaestus: Mixture Generative Modeling with Energy Guidance for Large-scale QoS Degradation [44.97875113025023]
We study the Quality of Service Degradation (QoSD) problem, in which an adversary perturbs edge weights to degrade network performance. No prior model directly tackles the QoSD problem under nonlinear edge-weight functions. This work proposes PIMMA, a self-reinforcing framework that synthesizes feasible solutions in latent space.
arXiv Detail & Related papers (2025-10-19T22:48:35Z) - APRIL: Auxiliary Physically-Redundant Information in Loss - A physics-informed framework for parameter estimation with a gravitational-wave case study [0.0]
Physics-Informed Neural Networks (PINNs) embed the partial differential equations governing the system under study directly into the training of neural networks. We present a complementary approach that includes auxiliary physically-redundant information in the loss. We mathematically demonstrate that these terms preserve the true physical minimum while reshaping the loss landscape.
arXiv Detail & Related papers (2025-10-15T15:34:19Z) - Kernel-Adaptive PI-ELMs for Forward and Inverse Problems in PDEs with Sharp Gradients [0.0]
This paper introduces the Kernel-Adaptive Physics-Informed Extreme Learning Machine (KAPI-ELM), designed to solve both forward and inverse Partial Differential Equation (PDE) problems involving localized sharp gradients. KAPI-ELM achieves state-of-the-art accuracy in both forward and inverse settings.
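As background for the PI-ELM family this entry builds on, here is a minimal generic physics-informed ELM for a toy linear ODE. The random tanh feature layer and least-squares solve are standard ELM ingredients; the paper's kernel-adaptive machinery is not reproduced, and all constants below are illustrative:

```python
import numpy as np

# Generic PI-ELM sketch for u'(t) = -u(t), u(0) = 1 on [0, 2].
# The hidden layer is random and fixed; only the linear output weights
# are solved by least squares over collocation residuals.
rng = np.random.default_rng(0)
n_pts, n_feat = 100, 60
w = rng.normal(0.0, 2.0, n_feat)       # fixed random input weights
b = rng.normal(0.0, 2.0, n_feat)       # fixed random biases
t = np.linspace(0.0, 2.0, n_pts)[:, None]

phi = np.tanh(t * w + b)               # hidden features, shape (n_pts, n_feat)
dphi = w * (1.0 - phi**2)              # analytic derivative of tanh(w*t + b)

# Residual rows enforce u' + u = 0; one weighted row enforces u(0) = 1.
bc_weight = 50.0
A = np.vstack([dphi + phi, bc_weight * np.tanh(b)[None, :]])
rhs = np.concatenate([np.zeros(n_pts), [bc_weight]])

c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
u = phi @ c                            # approximate solution at collocation points
max_err = np.max(np.abs(u - np.exp(-t[:, 0])))
```

Because the hidden weights stay fixed, training reduces to a single linear least-squares solve, which is what makes (PI-)ELMs fast compared with gradient-trained PINNs.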
arXiv Detail & Related papers (2025-07-14T13:03:53Z) - Equivariant Eikonal Neural Networks: Grid-Free, Scalable Travel-Time Prediction on Homogeneous Spaces [42.33765011920294]
We introduce a novel framework that integrates Equivariant Neural Fields (ENFs) with Neural Eikonal solvers. Our approach employs a single neural field in which a unified shared backbone is conditioned on signal-specific latent variables. We validate our approach through applications in seismic travel-time modeling on 2D, 3D, and spherical benchmark datasets.
arXiv Detail & Related papers (2025-05-21T21:29:18Z) - Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
We propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive implicit process prior that captures complex, non-stationary transition dynamics. Our ETGPSSM outperforms existing GPSSMs and neural-network-based SSMs in terms of computational efficiency and accuracy.
arXiv Detail & Related papers (2025-03-24T03:19:45Z) - Preconditioned FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
Numerical simulation and optimization of technical systems described by partial differential equations is expensive. A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method. In this paper, we extend this approach to saddle-point and non-linear fluid dynamics problems.
arXiv Detail & Related papers (2024-09-06T07:17:01Z) - Grad-Shafranov equilibria via data-free physics informed neural networks [0.0]
We show that PINNs can accurately and effectively solve the Grad-Shafranov equation with several different boundary conditions.
We introduce a parameterized PINN framework, expanding the input space to include variables such as pressure, aspect ratio, elongation, and triangularity.
arXiv Detail & Related papers (2023-11-22T16:08:38Z) - Neural network analysis of neutron and X-ray reflectivity data: Incorporating prior knowledge for tackling the phase problem [141.5628276096321]
We present an approach that utilizes prior knowledge to regularize the training process over larger parameter spaces.
We demonstrate the effectiveness of our method in various scenarios, including multilayer structures with box model parameterization.
In contrast to previous methods, our approach scales favorably when increasing the complexity of the inverse problem.
arXiv Detail & Related papers (2023-06-28T11:15:53Z) - A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues. First, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - Kinematically consistent recurrent neural networks for learning inverse problems in wave propagation [0.0]
We propose a new kinematically consistent, physics-based machine learning model.
In particular, we attempt to perform physically interpretable learning of inverse problems in wave propagation.
Even with modest training data, the kinematically consistent network can reduce the $L_\infty$ error norms of the plain LSTM predictions by about 45% and 55%, respectively.
arXiv Detail & Related papers (2021-10-08T05:51:32Z) - Optimal Transport Based Refinement of Physics-Informed Neural Networks [0.0]
We propose a refinement strategy for the well-known Physics-Informed Neural Networks (PINNs) for solving partial differential equations (PDEs), based on the concept of Optimal Transport (OT). PINN solvers have been found to suffer from a host of issues: spectral bias in fully-connected architectures, unstable gradient pathologies, and difficulties with convergence and accuracy. We present a novel training strategy for solving the Fokker-Planck-Kolmogorov Equation (FPKE) using OT-based sampling to supplement the existing PINNs framework.
arXiv Detail & Related papers (2021-05-26T02:51:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.