A Physics-Augmented GraphGPS Framework for the Reconstruction of 3D Riemann Problems from Sparse Data
- URL: http://arxiv.org/abs/2505.21421v1
- Date: Tue, 27 May 2025 16:49:58 GMT
- Title: A Physics-Augmented GraphGPS Framework for the Reconstruction of 3D Riemann Problems from Sparse Data
- Authors: Rami Cassia, Rich Kerswell
- Abstract summary: We develop a machine learning recipe, known as GraphGPS, for reconstructing canonical compressible flows from sparse observations. We modify message-passing such that information flows strictly from known nodes only, which results in computational savings. We also show that the GraphGPS framework outperforms numerous machine learning benchmarks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In compressible fluid flow, reconstructing shocks, discontinuities, rarefactions, and their interactions from sparse measurements is an important inverse problem with practical applications. Physics-informed machine learning has recently become an increasingly popular approach for such reconstruction tasks. In this work we explore a machine learning recipe, known as GraphGPS, for reconstructing canonical compressible flows known as 3D Riemann problems from sparse observations in a physics-informed manner. The GraphGPS framework combines the benefits of positional encodings, local message-passing on graphs, and global contextual awareness, and we explore the latter two components through an ablation study. Furthermore, we modify the aggregation step of message-passing so that it is aware of shocks and discontinuities, resulting in sharper reconstructions of these features. Additionally, we modify message-passing such that information flows strictly from known nodes only, which yields computational savings, better training convergence, and no degradation of reconstruction accuracy. We also show that the GraphGPS framework outperforms numerous machine learning benchmarks.
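To make the two message-passing modifications concrete, the sketch below shows one aggregation step in which only observed ("known") nodes emit messages and edges crossing large feature jumps are down-weighted. This is a minimal illustration under assumed forms: the masking convention, the exponential jump weighting, and all function and variable names are ours, not the authors' implementation.

```python
# Minimal sketch (assumed forms, not the paper's code): one message-passing
# step where only known sensor nodes emit messages, with a crude
# shock-awareness term that down-weights edges across large feature jumps.
import torch

def masked_message_passing(x, adj, known_mask, beta=5.0):
    """x: (N, F) node states; adj: (N, N) binary adjacency;
    known_mask: (N,) True where a node carries an observation;
    beta: sharpness of the (assumed) jump-sensitive weighting."""
    # Keep only columns belonging to known senders: messages cannot
    # originate from unknown nodes, saving work and avoiding noise.
    sender_adj = adj * known_mask.float().unsqueeze(0)

    # Down-weight edges with large feature jumps so averaging does not
    # smear sharp fronts (a stand-in for the paper's shock-aware step).
    jump = torch.cdist(x, x)                   # (N, N) pairwise distances
    w = sender_adj * torch.exp(-beta * jump)   # jump-sensitive edge weights

    # Weighted-mean aggregation over the surviving known senders.
    norm = w.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return (w @ x) / norm

# Toy usage: a 5-node path graph with the two endpoints observed.
N, F = 5, 3
x = torch.randn(N, F)
adj = torch.zeros(N, N)
idx = torch.arange(N - 1)
adj[idx, idx + 1] = 1.0
adj[idx + 1, idx] = 1.0
known = torch.tensor([True, False, False, False, True])
print(masked_message_passing(x, adj, known).shape)  # torch.Size([5, 3])
```

Note that nodes with no known neighbour receive a zero update under this scheme; in practice one would blend the aggregated message with the node's current state, for example through a residual connection.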
Related papers
- Graph Transformers for inverse physics: reconstructing flows around arbitrary 2D airfoils [0.0]
We introduce a Graph Transformer framework that serves as a general inverse physics engine on meshes. We evaluate this framework on a dataset of steady-state RANS simulations around diverse airfoil geometries. We conduct experiments and provide insights into the relative importance of local geometric processing and global attention mechanisms in mesh-based inverse problems.
arXiv Detail & Related papers (2025-01-28T17:06:09Z)
- Operator Learning for Reconstructing Flow Fields from Sparse Measurements: an Energy Transformer Approach [8.156288231122543]
We propose a novel operator learning framework for solving reconstruction problems using the Energy Transformer (ET). We formulate reconstruction as a mapping from incomplete observed data to full reconstructed fields. Results demonstrate the ability of ET to accurately reconstruct complex flow fields from highly incomplete data.
arXiv Detail & Related papers (2025-01-02T19:24:19Z)
- Point Cloud Denoising With Fine-Granularity Dynamic Graph Convolutional Networks [58.050130177241186]
Noise perturbations often corrupt 3-D point clouds, hindering downstream tasks such as surface reconstruction, rendering, and further processing.
This paper introduces fine-granularity dynamic graph convolutional networks, called GDGCN, a novel approach to denoising 3-D point clouds.
arXiv Detail & Related papers (2024-11-21T14:19:32Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Learning Physical Simulation with Message Passing Transformer [5.431396242057807]
We propose a new universal architecture based on Graph Neural Networks, the Message Passing Transformer, which incorporates a Message Passing framework.
Our architecture achieves significant accuracy improvements in long-term rollouts for both Lagrangian and Eulerian dynamical systems.
arXiv Detail & Related papers (2024-06-10T07:14:56Z)
- Gegenbauer Graph Neural Networks for Time-varying Signal Reconstruction [4.6210788730570584]
Reconstructing time-varying graph signals is a critical problem in machine learning and signal processing with broad applications.
We propose a novel approach that incorporates a learning module to enhance the accuracy of the downstream task.
We conduct extensive experiments on real datasets to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-03-28T19:29:17Z)
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction [76.35904458027694]
Masked autoencoder models lack good generalization ability on graph data.
We propose a novel graph masked autoencoder framework called GiGaMAE.
Our results will shed light on the design of foundation models on graph-structured data.
arXiv Detail & Related papers (2023-08-18T16:30:51Z)
- General Neural Gauge Fields [100.35916421218101]
We develop a learning framework to jointly optimize gauge transformations and neural fields.
We derive an information-invariant gauge transformation that inherently preserves scene information and yields superior performance.
arXiv Detail & Related papers (2023-05-05T12:08:57Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
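The unrolling idea is concrete enough to sketch. Below is a minimal illustration of truncated proximal gradient iterations turned into a learnable network; the quadratic data-fit term and soft-threshold proximal step are assumed placeholders for GDN's actual formulation, and all names are ours.

```python
# Minimal sketch of unrolled, truncated proximal gradient iterations in the
# spirit of GDN; the quadratic data-fit term and soft-threshold prox are
# assumed stand-ins, not the paper's exact model.
import torch
import torch.nn as nn

class UnrolledDeconvolution(nn.Module):
    def __init__(self, num_layers=8):
        super().__init__()
        # Each unrolled iteration gets its own learnable step size and
        # sparsity threshold, trained end-to-end in a supervised fashion.
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((num_layers,), 0.01))

    def forward(self, observed):
        latent = observed.clone()  # initialise estimate from the observation
        for alpha, tau in zip(self.steps, self.thresholds):
            # Gradient step on a placeholder data term 0.5 * ||S - O||^2.
            latent = latent - alpha * (latent - observed)
            # Proximal step: soft-thresholding promotes sparse edge weights.
            latent = torch.sign(latent) * torch.relu(latent.abs() - tau)
            # Project back to symmetric, hollow adjacency matrices.
            latent = 0.5 * (latent + latent.transpose(-1, -2))
            latent = latent - torch.diag_embed(
                latent.diagonal(dim1=-2, dim2=-1))
        return latent

# Toy usage: estimate a sparse latent graph from a dense observed one.
obs = torch.rand(6, 6)
obs = 0.5 * (obs + obs.t())
print(UnrolledDeconvolution()(obs).shape)  # torch.Size([6, 6])
```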
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Not Half Bad: Exploring Half-Precision in Graph Convolutional Neural Networks [8.460826851547294]
Efficient graph analysis using modern machine learning is receiving growing attention.
Deep learning approaches often operate over the entire adjacency matrix.
It is desirable to identify efficient measures to reduce both run-time and memory requirements.
arXiv Detail & Related papers (2020-10-23T19:47:42Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
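The affine skip connection in the last entry is simple enough to illustrate. Below is a minimal sketch that sums a node-wise fully connected (affine) branch with a graph convolution; the GCN-style normalised-adjacency propagation is an assumed stand-in for "any graph convolution operator", and all names are ours.

```python
# Minimal sketch of an affine skip connection: a fully connected (affine)
# branch added to a graph convolution; the GCN-style propagation here is an
# assumed stand-in for "any graph convolution operator".
import torch
import torch.nn as nn

class AffineSkipGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv = nn.Linear(in_dim, out_dim, bias=False)  # graph-conv branch
        self.affine = nn.Linear(in_dim, out_dim)            # affine skip branch

    def forward(self, x, adj):
        # Symmetrically normalised propagation A_hat = D^-1/2 (A + I) D^-1/2.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).rsqrt()
        a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)
        # Graph convolution plus the affine skip path.
        return a_hat @ self.conv(x) + self.affine(x)

# Toy usage on a random symmetric 4-node graph.
adj = (torch.rand(4, 4) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()
print(AffineSkipGraphConv(8, 16)(torch.randn(4, 8), adj).shape)  # [4, 16]
```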
This list is automatically generated from the titles and abstracts of the papers in this site.