IP-Basis PINNs: Efficient Multi-Query Inverse Parameter Estimation
- URL: http://arxiv.org/abs/2509.07245v1
- Date: Mon, 08 Sep 2025 21:43:41 GMT
- Title: IP-Basis PINNs: Efficient Multi-Query Inverse Parameter Estimation
- Authors: Shalev Manor, Mohammad Kohandel
- Abstract summary: We present Inverse-Parameter Basis PINNs (IP-Basis PINNs) to enable rapid and efficient inference for inverse problems. Our method employs an offline-online decomposition: a deep network is first trained offline to produce a rich set of basis functions. We demonstrate the efficacy of IP-Basis PINNs on three diverse benchmarks.
- Score: 0.764671395172401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving inverse problems with Physics-Informed Neural Networks (PINNs) is computationally expensive for multi-query scenarios, as each new set of observed data requires a new, expensive training procedure. We present Inverse-Parameter Basis PINNs (IP-Basis PINNs), a meta-learning framework that extends the foundational work of Desai et al. (2022) to enable rapid and efficient inference for inverse problems. Our method employs an offline-online decomposition: a deep network is first trained offline to produce a rich set of basis functions that span the solution space of a parametric differential equation. For each new inverse problem online, this network is frozen, and solutions and parameters are inferred by training only a lightweight linear output layer against observed data. Key innovations that make our approach effective for inverse problems include: (1) a novel online loss formulation for simultaneous solution reconstruction and parameter identification, (2) a significant reduction in computational overhead via forward-mode automatic differentiation for PDE loss evaluation, and (3) a non-trivial validation and early-stopping mechanism for robust offline training. We demonstrate the efficacy of IP-Basis PINNs on three diverse benchmarks, including an extension to universal PINNs for unknown functional terms, showing consistent performance across constant and functional parameter estimation, a significant speedup per query over standard PINNs, and robust operation with scarce and noisy data.
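The online stage of the offline-online decomposition described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a fixed Fourier feature basis stands in for the frozen offline network, the data are synthetic, and only the solution-reconstruction part of the online fit is shown (the joint parameter-identification loss is omitted).

```python
import numpy as np

def basis(x, m=8):
    """Stand-in for the frozen offline network: m basis functions per point.
    (Assumption: a sine feature basis replaces the pretrained deep network.)"""
    ks = np.arange(1, m + 1)
    return np.sin(np.outer(x, ks))  # shape (len(x), m)

# Synthetic "observed" data from an unknown solution u(x) = sin(3x) plus noise.
rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, np.pi, 50)
u_obs = np.sin(3.0 * x_obs) + 0.01 * rng.standard_normal(x_obs.size)

# Online stage: only the linear output layer (coefficients w) is trained;
# here the least-squares fit against observations is solved in closed form.
Phi = basis(x_obs)
w, *_ = np.linalg.lstsq(Phi, u_obs, rcond=None)

# The recovered coefficients concentrate on the k = 3 mode (index 2).
print(int(np.argmax(np.abs(w))))
```

Because the expensive basis network is trained once offline, each new query reduces to a cheap linear solve like the one above.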
Related papers
- Do physics-informed neural networks (PINNs) need to be deep? Shallow PINNs using the Levenberg-Marquardt algorithm [0.0]
This work investigates the use of shallow physics-informed neural networks (PINNs) for solving forward and inverse problems of nonlinear partial differential equations (PDEs). The proposed approach is tested on several benchmark problems, including the Burgers, Schrödinger, Allen-Cahn, and three-dimensional Bratu equations.
arXiv Detail & Related papers (2026-02-09T11:05:57Z) - Physics-informed neural networks to solve inverse problems in unbounded domains [0.0]
In this work, we develop a methodology for addressing inverse problems in infinite and semi-infinite domains. We introduce a novel sampling strategy for the network's training points, using the negative exponential and normal distributions. We show that PINNs provide a more accurate and computationally efficient solution, solving the inverse problem 1,000 times faster than PIKANs while achieving a relative error of the same order of magnitude or lower.
arXiv Detail & Related papers (2025-12-12T22:44:46Z) - MoPINNEnKF: Iterative Model Inference using generic-PINN-based ensemble Kalman filter [5.373182035720355]
Physics-informed neural networks (PINNs) have emerged as a powerful tool for solving forward and inverse problems involving partial differential equations (PDEs). We propose an iterative multi-objective PINN ensemble Kalman filter (MoPINNEnKF) framework that improves the robustness and accuracy of PINNs in both forward and inverse problems.
arXiv Detail & Related papers (2025-05-31T22:20:18Z) - Equation identification for fluid flows via physics-informed neural networks [46.29203572184694]
We present a new benchmark problem for inverse PINNs based on a parametric sweep of the 2D Burgers' equation with rotational flow.
We show that a novel strategy that alternates between first- and second-order optimization proves superior to typical first-order strategies for estimating parameters.
arXiv Detail & Related papers (2024-08-30T13:17:57Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - A practical PINN framework for multi-scale problems with multi-magnitude
loss terms [3.8645424244172135]
We propose a practical deep learning framework for multi-scale problems using PINNs.
New PINN methods differ from the conventional PINN method mainly in two aspects.
The proposed methods significantly outperform the conventional PINN method in terms of computational efficiency and computational accuracy.
arXiv Detail & Related papers (2023-08-13T03:26:01Z) - Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge
Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC)
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - PhyCRNet: Physics-informed Convolutional-Recurrent Network for Solving
Spatiotemporal PDEs [8.220908558735884]
Partial differential equations (PDEs) play a fundamental role in modeling and simulating problems across a wide range of disciplines.
Recent advances in deep learning have shown the great potential of physics-informed neural networks (PINNs) to solve PDEs as a basis for data-driven inverse analysis.
We propose the novel physics-informed convolutional-recurrent learning architectures (PhyCRNet and PhyCRNet-s) for solving PDEs without any labeled data.
arXiv Detail & Related papers (2021-06-26T22:22:19Z) - Efficient training of physics-informed neural networks via importance
sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to compute the response of systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
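The importance-sampling idea can be sketched as follows. The toy residual function below stands in for an actual PINN's pointwise PDE residual and is an assumption for illustration: collocation points are resampled with probability proportional to the current residual, concentrating training effort where the PDE is violated most.

```python
import numpy as np

rng = np.random.default_rng(2)

# Candidate collocation points and a toy pointwise PDE residual that is
# large near x = 0.5 (assumption: stands in for a real network's residual).
x_pool = np.linspace(-1.0, 1.0, 1000)
residual = np.exp(-((x_pool - 0.5) ** 2) / 0.01)

# Importance sampling: draw a training batch with probability proportional
# to the residual magnitude.
p = residual / residual.sum()
idx = rng.choice(x_pool.size, size=200, p=p)
batch = x_pool[idx]

# The batch concentrates where the residual is large.
print(abs(float(batch.mean()) - 0.5) < 0.05)  # True
```

In practice the residuals, and hence the sampling distribution, would be recomputed periodically as training reshapes where the PDE loss is largest.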
arXiv Detail & Related papers (2021-04-26T02:45:10Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.