APRIL: Auxiliary Physically-Redundant Information in Loss - A physics-informed framework for parameter estimation with a gravitational-wave case study
- URL: http://arxiv.org/abs/2510.13677v1
- Date: Wed, 15 Oct 2025 15:34:19 GMT
- Title: APRIL: Auxiliary Physically-Redundant Information in Loss - A physics-informed framework for parameter estimation with a gravitational-wave case study
- Authors: Matteo Scialpi, Francesco Di Clemente, Leigh Smith, Michał Bejger,
- Abstract summary: Physics-Informed Neural Networks (PINNs) embed the partial differential equations governing the system under study directly into the training of neural networks. We present a complementary approach by including auxiliary physically-redundant information in the loss. We mathematically demonstrate that these terms preserve the true physical minimum while reshaping the loss landscape.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-Informed Neural Networks (PINNs) embed the partial differential equations (PDEs) governing the system under study directly into the training of Neural Networks, ensuring solutions that respect physical laws. While effective for single-system problems, standard PINNs scale poorly to datasets containing many realizations of the same underlying physics with varying parameters. To address this limitation, we present a complementary approach by including auxiliary physically-redundant information in loss (APRIL), i.e. augment the standard supervised output-target loss with auxiliary terms which exploit exact physical redundancy relations among outputs. We mathematically demonstrate that these terms preserve the true physical minimum while reshaping the loss landscape, improving convergence toward physically consistent solutions. As a proof-of-concept, we benchmark APRIL on a fully-connected neural network for gravitational wave (GW) parameter estimation (PE). We use simulated, noise-free compact binary coalescence (CBC) signals, focusing on inspiral-frequency waveforms to recover the chirp mass $\mathcal{M}$, the total mass $M_\mathrm{tot}$, and symmetric mass ratio $\eta$ of the binary. In this controlled setting, we show that APRIL achieves up to an order-of-magnitude improvement in test accuracy, especially for parameters that are otherwise difficult to learn. This method provides physically consistent learning for large multi-system datasets and is well suited for future GW analyses involving realistic noise and broader parameter ranges.
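The paper does not include code, but the core idea can be sketched from the abstract: augment the supervised output-target loss with a penalty on an exact redundancy relation among the predicted parameters. For the quantities named above, the chirp mass satisfies $\mathcal{M} = \eta^{3/5} M_\mathrm{tot}$, so the auxiliary term vanishes at the true minimum and only reshapes the landscape away from it. A minimal NumPy sketch, with the weight `w_aux` and the output ordering `(M_chirp, M_tot, eta)` as assumed choices not specified in the abstract:

```python
import numpy as np

def april_loss(pred, target, w_aux=0.1):
    """Supervised MSE plus an APRIL-style auxiliary redundancy penalty.

    pred, target: arrays of shape (N, 3) holding (M_chirp, M_tot, eta)
    (ordering and w_aux are illustrative assumptions, not from the paper).
    The exact relation M_chirp = eta**(3/5) * M_tot holds for the true
    parameters, so the auxiliary term is zero at the physical minimum.
    """
    # Standard supervised output-target loss.
    mse = np.mean((pred - target) ** 2)

    # Auxiliary term: penalize violations of the redundancy relation
    # among the network's own outputs (no extra labels needed).
    m_chirp, m_tot, eta = pred[:, 0], pred[:, 1], pred[:, 2]
    aux = np.mean((m_chirp - eta ** 0.6 * m_tot) ** 2)

    return mse + w_aux * aux
```

Because the auxiliary term depends only on the network outputs, predictions that match the targets (which satisfy the relation by construction) incur no extra penalty, while physically inconsistent predictions are pushed back toward the constraint surface.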
Related papers
- Neural network optimization strategies and the topography of the loss landscape [45.88028371034407]
We investigate neural network learning by stochastic gradient descent (SGD). We use several computational tools to investigate neural network parameters obtained by these optimization methods.
arXiv Detail & Related papers (2026-02-24T17:49:13Z) - naPINN: Noise-Adaptive Physics-Informed Neural Networks for Recovering Physics from Corrupted Measurement [3.450547277166974]
We propose the Noise-Adaptive Physics-Informed Neural Network (naPINN), which robustly recovers physical solutions from corrupted measurements. naPINN embeds an energy-based model into the training loop to learn the latent distribution of prediction residuals. We demonstrate the efficacy of naPINN on various benchmark partial differential equations corrupted by non-Gaussian noise and varying rates of outliers.
arXiv Detail & Related papers (2026-01-30T06:03:33Z) - PhyG-MoE: A Physics-Guided Mixture-of-Experts Framework for Energy-Efficient GNSS Interference Recognition [49.955269674859004]
This paper introduces PhyG-MoE (Physics-Guided Mixture-of-Experts), a framework designed to align model capacity with signal complexity. Unlike static architectures, the proposed system employs a spectrum-based gating mechanism that routes signals based on their spectral feature entanglement. A high-capacity TransNeXt expert is activated on-demand to disentangle complex features in saturated scenarios, while lightweight experts handle fundamental signals to minimize latency.
arXiv Detail & Related papers (2026-01-19T07:57:52Z) - Deep Deterministic Nonlinear ICA via Total Correlation Minimization with Matrix-Based Entropy Functional [41.05541240448253]
Blind source separation, particularly through independent component analysis (ICA), is widely utilized across various signal processing domains. We present deep deterministic nonlinear independent component analysis (DDICA), a novel deep neural network-based framework designed to address these limitations. We validated DDICA across a range of applications, including simulated signal mixtures, hyperspectral image unmixing, modeling of primary visual receptive fields, and resting-state functional magnetic resonance imaging (fMRI) data analysis.
arXiv Detail & Related papers (2025-12-31T19:44:19Z) - Performance Guarantees for Quantum Neural Estimation of Entropies [31.955071410400947]
Quantum neural estimators (QNEs) combine classical neural networks with parametrized quantum circuits. We study formal guarantees for QNEs of measured relative entropies in the form of non-asymptotic error risk bounds. Our theory aims to facilitate principled implementation of QNEs for measured relative entropies.
arXiv Detail & Related papers (2025-11-24T16:36:06Z) - PINNverse: Accurate parameter estimation in differential equations from noisy data with constrained physics-informed neural networks [0.0]
Physics-Informed Neural Networks (PINNs) have emerged as effective tools for solving such problems. We introduce PINNverse, a training paradigm that addresses these limitations by reformulating the learning process as a constrained differential optimization problem. We demonstrate robust and accurate parameter estimation from noisy data in four classical ODE and PDE models from physics and biology.
arXiv Detail & Related papers (2025-04-07T16:34:57Z) - Hamiltonian Neural Networks approach to fuzzball geodesics [39.58317527488534]
Hamiltonian Neural Networks (HNNs) are tools that minimize a loss function to solve Hamilton equations of motion. In this work, we implement several HNNs trained to solve, with high accuracy, the Hamilton equations for a massless probe moving inside a smooth and horizonless geometry known as the D1-D5 circular fuzzball.
arXiv Detail & Related papers (2025-02-28T09:25:49Z) - Network scaling and scale-driven loss balancing for intelligent poroelastography [2.665036498336221]
A deep learning framework is developed for multiscale characterization of poroelastic media from full waveform data.
Two major challenges impede direct application of existing state-of-the-art techniques for this purpose.
We propose the idea of network scaling, where the neural property maps are constructed by unit shape functions composed into a scaling layer.
arXiv Detail & Related papers (2024-10-27T23:06:29Z) - Learning Physics From Video: Unsupervised Physical Parameter Estimation for Continuous Dynamical Systems [49.11170948406405]
We propose an unsupervised method to estimate the physical parameters of known, continuous governing equations from single videos. We take the field closer to reality by recording Delfys75: our own real-world dataset of 75 videos for five different types of dynamical systems.
arXiv Detail & Related papers (2024-10-02T09:44:54Z) - MPIPN: A Multi Physics-Informed PointNet for solving parametric acoustic-structure systems [33.32926047057572]
Multi Physics-Informed PointNet (MPIPN) is proposed for solving parametric acoustic-structure systems.
MPIPN induces an enhanced point-cloud architecture that encompasses explicit physical quantities and geometric features of computational domains.
The framework is trained by adaptive physics-informed loss functions for corresponding computational domains.
arXiv Detail & Related papers (2024-03-02T08:27:05Z) - Physics-Informed Neural Networks for Material Model Calibration from Full-Field Displacement Data [0.0]
We propose PINNs for the calibration of models from full-field displacement and global force data in a realistic regime.
We demonstrate that the enhanced PINNs are capable of identifying material parameters from both experimental one-dimensional data and synthetic full-field displacement data.
arXiv Detail & Related papers (2022-12-15T11:01:32Z) - Physics-enhanced deep surrogates for partial differential equations [30.731686639510517]
We present a "physics-enhanced deep-surrogate" ("PEDS") approach towards developing fast surrogate models for complex physical systems.
Specifically, a combination of a low-fidelity, explainable physics simulator and a neural network generator is proposed, which is trained end-to-end to globally match the output of an expensive high-fidelity numerical solver.
arXiv Detail & Related papers (2021-11-10T18:43:18Z) - Data vs. Physics: The Apparent Pareto Front of Physics-Informed Neural Networks [8.487185704099925]
Physics-informed neural networks (PINNs) have emerged as a promising deep learning method.
PINNs are difficult to train and often require a careful tuning of loss weights when data and physics loss functions are combined.
arXiv Detail & Related papers (2021-05-03T13:47:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.