GINNs: Graph-Informed Neural Networks for Multiscale Physics
- URL: http://arxiv.org/abs/2006.14807v1
- Date: Fri, 26 Jun 2020 05:47:45 GMT
- Title: GINNs: Graph-Informed Neural Networks for Multiscale Physics
- Authors: Eric J. Hall, Søren Taverniers, Markos A. Katsoulakis, and
Daniel M. Tartakovsky
- Abstract summary: Graph-Informed Neural Network (GINN) is a hybrid approach combining deep learning with probabilistic graphical models (PGMs)
GINNs produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.
- Score: 1.1470070927586016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the concept of a Graph-Informed Neural Network (GINN), a hybrid
approach combining deep learning with probabilistic graphical models (PGMs)
that acts as a surrogate for physics-based representations of multiscale and
multiphysics systems. GINNs address the twin challenges of removing intrinsic
computational bottlenecks in physics-based models and generating large data
sets for estimating probability distributions of quantities of interest (QoIs)
with a high degree of confidence. Both the selection of the complex physics
learned by the NN and its supervised learning/prediction are informed by the
PGM, which includes the formulation of structured priors for tunable control
variables (CVs) to account for their mutual correlations and ensure physically
sound CV and QoI distributions. GINNs accelerate the prediction of QoIs
essential for simulation-based decision-making where generating sufficient
sample data using physics-based models alone is often prohibitively expensive.
Using a real-world application grounded in supercapacitor-based energy storage,
we describe the construction of GINNs from a Bayesian network-embedded
homogenized model for supercapacitor dynamics, and demonstrate their ability to
produce kernel density estimates of relevant non-Gaussian, skewed QoIs with
tight confidence intervals.
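The workflow the abstract describes (sample correlated control variables from a structured prior, evaluate a cheap learned surrogate, then form kernel density estimates of the QoI with confidence bands) can be sketched in a few lines. The correlated Gaussian prior, the fixed `surrogate` map, and the bootstrap band below are illustrative stand-ins, not the paper's Bayesian network or trained network:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Structured prior over two control variables (CVs): a correlated Gaussian
# stands in for the paper's Bayesian-network prior over CVs.
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
cvs = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

def surrogate(x):
    # Hypothetical trained NN surrogate: a fixed nonlinear map whose output
    # is a skewed, non-Gaussian QoI, mimicking the paper's setting.
    return np.exp(0.5 * x[:, 0]) + 0.2 * x[:, 1] ** 2

qoi = surrogate(cvs)

# Kernel density estimate of the QoI, with a simple bootstrap confidence band.
kde = gaussian_kde(qoi)
grid = np.linspace(qoi.min(), qoi.max(), 200)
density = kde(grid)

boot = np.stack([gaussian_kde(rng.choice(qoi, size=qoi.size))(grid)
                 for _ in range(50)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(density.shape)
```

Because the surrogate is cheap to evaluate, the 5000 samples (and the 50 bootstrap KDEs) cost almost nothing, which is the acceleration argument the abstract makes against sampling the physics-based model directly.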
Related papers
- EveNet: A Foundation Model for Particle Collision Data Analysis [11.464004875705067]
EveNet is an event-level foundation model pretrained on 500 million simulated collision events.
By leveraging a shared particle-cloud representation, EveNet outperforms state-of-the-art baselines across diverse tasks.
arXiv Detail & Related papers (2026-01-23T19:01:51Z) - Graph Network-based Structural Simulator: Graph Neural Networks for Structural Dynamics [40.190675168132124]
We introduce the Graph Network-based Structural Simulator (GNSS), a GNN framework for surrogate modeling of dynamic structural problems.
We evaluate it on a case study involving a beam excited by a 50 kHz Hanning-modulated pulse.
The results show that GNSS accurately reproduces the physics of the problem over hundreds of timesteps and generalizes to unseen loading conditions, where existing GNNs fail to converge or deliver meaningful predictions.
arXiv Detail & Related papers (2025-10-29T16:47:24Z) - Reframing Generative Models for Physical Systems using Stochastic Interpolants [45.16806809746592]
Generative models have emerged as powerful surrogates for physical systems, demonstrating increased accuracy, stability, and/or statistical fidelity.
Most approaches rely on iteratively denoising a Gaussian, a choice that may not be the most effective for autoregressive prediction tasks in PDEs and dynamical systems such as climate.
In this work, we benchmark generative models across diverse physical domains and tasks, and highlight the role of interpolants.
arXiv Detail & Related papers (2025-09-30T14:02:00Z) - From Distributional to Quantile Neural Basis Models: the case of Electricity Price Forecasting [42.062078728472734]
We introduce the Quantile Neural Basis Model, which incorporates the interpretability principles of Quantile Generalized Additive Models.
We validate our approach on day-ahead electricity price forecasting, achieving predictive performance comparable to distributional and quantile regression neural networks.
arXiv Detail & Related papers (2025-09-17T15:55:59Z) - Quantum-Boosted High-Fidelity Deep Learning [7.198071279424711]
We introduce the Quantum Boltzmann Machine-Variational Autoencoder (QBM-VAE), a large-scale and long-time stable hybrid quantum-classical architecture.
Our framework leverages a quantum processor for efficient sampling from the Boltzmann distribution, enabling its use as a powerful prior within a deep generative model.
arXiv Detail & Related papers (2025-08-15T03:51:20Z) - Detecting Entanglement in High-Spin Quantum Systems via a Stacking Ensemble of Machine Learning Models [0.0]
This study examines the effectiveness of ensemble machine learning models as a reliable and scalable approach for estimating entanglement, measured by negativity, in quantum systems.
We construct an ensemble regressor integrating Neural Networks (NNs), XGBoost (XGB), and Extra Trees (ET).
The ensemble model with a CatBoost (CB) stacking meta-learner demonstrates robust performance, accurately predicting negativity across different dimensionalities and state types.
arXiv Detail & Related papers (2025-07-17T04:34:11Z) - Multiscale Analysis of Woven Composites Using Hierarchical Physically Recurrent Neural Networks [0.0]
Multiscale homogenization of woven composites requires detailed micromechanical evaluations.
This study introduces a Hierarchical Physically Recurrent Neural Network (HPRNN) employing two levels of surrogate modeling.
arXiv Detail & Related papers (2025-03-06T19:02:32Z) - Positional Encoder Graph Quantile Neural Networks for Geographic Data [4.277516034244117]
We introduce the Positional Graph Quantile Neural Network (PE-GQNN), a novel method that integrates PE-GNNs, Quantile Neural Networks, and recalibration techniques in a fully nonparametric framework.
Experiments on benchmark datasets demonstrate that PE-GQNN significantly outperforms existing state-of-the-art methods in both predictive accuracy and uncertainty quantification.
arXiv Detail & Related papers (2024-09-27T16:02:12Z) - Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
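For linear equality constraints, a KKT-based hard-constraint layer of this kind reduces to a closed-form orthogonal projection of the network output onto the constraint set. The toy balance matrix `A`, vector `b`, and raw prediction below are hypothetical, not the paper's Aspen case studies:

```python
import numpy as np

def project_onto_equality(x, A, b):
    """Orthogonal projection of a network output x onto {z : A z = b}.

    Closed-form KKT solution of  min ||z - x||^2  s.t.  A z = b,
    a minimal stand-in for a hard-constraint output layer.
    """
    AAT_inv = np.linalg.inv(A @ A.T)
    return x - A.T @ AAT_inv @ (A @ x - b)

# Toy mass balance: the three outputs must sum to exactly 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

raw = np.array([0.5, 0.4, 0.3])        # unconstrained NN prediction (sums to 1.2)
hard = project_onto_equality(raw, A, b)  # constraint now holds exactly
```

The projection is differentiable and independent of the network's weights, so it can sit after the output layer during both training and inference, which is what makes the constraint "hard" rather than a soft penalty.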
arXiv Detail & Related papers (2024-02-11T17:40:26Z) - Multi-fidelity physics constrained neural networks for dynamical systems [16.6396704642848]
We propose the Multi-Scale Physics-Constrained Neural Network (MSPCNN).
MSPCNN offers a novel methodology for incorporating data with different levels of fidelity into a unified latent space.
Unlike conventional methods, MSPCNN also manages to employ multi-fidelity data to train the predictive model.
arXiv Detail & Related papers (2024-02-03T05:05:26Z) - Evaluation of machine learning architectures on the quantification of
epistemic and aleatoric uncertainties in complex dynamical systems [0.0]
Uncertainty Quantification (UQ) is a self-assessed estimate of the model error.
We examine several machine learning techniques, including both Gaussian processes and a family of UQ-augmented neural networks.
We evaluate UQ accuracy (distinct from model accuracy) using two metrics: the distribution of normalized residuals on validation data, and the distribution of estimated uncertainties.
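The first of these metrics is simple to compute: normalized residuals (observation minus predicted mean, divided by predicted standard deviation) should be approximately standard normal for a well-calibrated model. The synthetic data below is illustrative, not the paper's benchmark:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic validation set: noisy observations, plus a model's predicted
# means and standard deviations (here perfectly calibrated by construction).
true_mean, true_std = 2.0, 0.5
y = rng.normal(true_mean, true_std, size=10_000)
pred_mean = np.full_like(y, true_mean)
pred_std = np.full_like(y, true_std)

# Normalized residuals: mean near 0 and std near 1 indicate good calibration;
# std well above 1 means the model is overconfident, well below 1 underconfident.
z = (y - pred_mean) / pred_std
print(z.mean(), z.std())
```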
arXiv Detail & Related papers (2023-06-27T02:35:25Z) - Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient
Prediction [45.84205238554709]
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
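The consistency condition being enforced can be checked numerically: for a binary mixture at fixed temperature and pressure, the Gibbs-Duhem equation requires x1 * d(ln gamma1)/dx1 + x2 * d(ln gamma2)/dx1 = 0. The sketch below evaluates this residual, the quantity the paper adds (via automatic differentiation) to the training loss, for a simple two-suffix Margules model standing in for a neural predictor; the parameter value and finite-difference scheme are illustrative:

```python
import numpy as np

A_MARGULES = 1.5  # hypothetical interaction parameter

def ln_gammas(x1):
    """Two-suffix Margules model: a thermodynamically consistent stand-in
    for a neural activity-coefficient predictor."""
    x2 = 1.0 - x1
    return A_MARGULES * x2 ** 2, A_MARGULES * x1 ** 2

def gibbs_duhem_residual(x1, h=1e-5):
    """x1 * d(ln g1)/dx1 + x2 * d(ln g2)/dx1 via central differences;
    zero everywhere for a thermodynamically consistent model."""
    g1p, g2p = ln_gammas(x1 + h)
    g1m, g2m = ln_gammas(x1 - h)
    dg1 = (g1p - g1m) / (2 * h)
    dg2 = (g2p - g2m) / (2 * h)
    return x1 * dg1 + (1.0 - x1) * dg2

# Mean squared residual over a composition grid: the penalty term that
# would be added to the data-fitting loss when training a network.
comps = np.linspace(0.05, 0.95, 19)
penalty = float(np.mean(gibbs_duhem_residual(comps) ** 2))
print(penalty)  # vanishes (up to roundoff) for a consistent model
```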
arXiv Detail & Related papers (2023-05-31T07:36:45Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System
Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z) - On Energy-Based Models with Overparametrized Shallow Neural Networks [44.74000986284978]
Energy-based models (EBMs) are a powerful framework for generative modeling.
In this work we focus on shallow neural networks.
We show that models trained in the so-called "active" regime provide a statistical advantage over their associated "lazy" or kernel regime.
arXiv Detail & Related papers (2021-04-15T15:34:58Z) - ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations [86.41674945012369]
We develop a scalable and expressive Graph Neural Networks model, ForceNet, to approximate atomic forces.
Our proposed ForceNet is able to predict atomic forces more accurately than state-of-the-art physics-based GNNs.
arXiv Detail & Related papers (2021-03-02T03:09:06Z) - Physics-aware, deep probabilistic modeling of multiscale dynamics in the
Small Data regime [0.0]
The present paper offers a probabilistic perspective that simultaneously identifies predictive, lower-dimensional coarse-grained (CG) variables as well as their dynamics.
We make use of the expressive ability of deep neural networks in order to represent the right-hand side of the CG evolution law.
We demonstrate the efficacy of the proposed framework in a high-dimensional system of moving particles.
arXiv Detail & Related papers (2021-02-08T15:04:05Z) - Probabilistic electric load forecasting through Bayesian Mixture Density
Networks [70.50488907591463]
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach, framed on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
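The mixture-density construction this builds on is compact: the network head emits mixture weights, means, and scales, and the predictive density is their weighted Gaussian sum. The parameter values below are hypothetical, not a trained forecaster:

```python
import numpy as np

# Hypothetical mixture-density head output for one input: K = 2 components.
weights = np.array([0.3, 0.7])
means = np.array([10.0, 14.0])   # e.g. load in kW
scales = np.array([1.0, 2.0])

def predictive_density(y):
    """Weighted sum of Gaussian component densities evaluated at y."""
    comps = (np.exp(-0.5 * ((y - means) / scales) ** 2)
             / (scales * np.sqrt(2.0 * np.pi)))
    return float(np.sum(weights * comps))

# Sanity check: the density integrates to ~1 (Riemann sum over a wide grid).
grid = np.linspace(0.0, 30.0, 3001)
dens = np.array([predictive_density(v) for v in grid])
mass = float(np.sum(dens) * (grid[1] - grid[0]))
print(mass)
```

Quantiles and prediction intervals for probabilistic load forecasting then follow directly from this density, which is what distinguishes the approach from point forecasting.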
arXiv Detail & Related papers (2020-12-23T16:21:34Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.