Multiscale Graph Neural Network Autoencoders for Interpretable
Scientific Machine Learning
- URL: http://arxiv.org/abs/2302.06186v2
- Date: Thu, 16 Feb 2023 00:26:32 GMT
- Title: Multiscale Graph Neural Network Autoencoders for Interpretable
Scientific Machine Learning
- Authors: Shivam Barwey, Varun Shankar, Romit Maulik
- Abstract summary: The goal of this work is to address two limitations in autoencoder-based models: latent space interpretability and compatibility with unstructured meshes.
This is accomplished here with the development of a novel graph neural network (GNN) autoencoding architecture with demonstrations on complex fluid flow applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of this work is to address two limitations in autoencoder-based
models: latent space interpretability and compatibility with unstructured
meshes. This is accomplished here with the development of a novel graph neural
network (GNN) autoencoding architecture with demonstrations on complex fluid
flow applications. To address the first goal of interpretability, the GNN
autoencoder achieves a reduction in the number of nodes in the encoding stage
through an adaptive graph reduction procedure. This reduction procedure
essentially amounts to flowfield-conditioned node sampling and sensor
identification, and produces interpretable latent graph representations
tailored to the flowfield reconstruction task in the form of so-called masked
fields. These masked fields allow the user to (a) visualize where in physical
space a given latent graph is active, and (b) interpret the time-evolution of
the latent graph connectivity in accordance with the time-evolution of unsteady
flow features (e.g. recirculation zones, shear layers) in the domain. To
address the goal of unstructured mesh compatibility, the autoencoding
architecture utilizes a series of multi-scale message passing (MMP) layers,
each of which models information exchange among node neighborhoods at various
lengthscales. The MMP layer, which augments standard single-scale message
passing with learnable coarsening operations, allows the decoder to more
efficiently reconstruct the flowfield from the identified regions in the masked
fields. Analysis of latent graphs produced by the autoencoder for various model
settings is conducted using unstructured snapshot data sourced from
large-eddy simulations in a backward-facing step (BFS) flow configuration with
an OpenFOAM-based flow solver at high Reynolds numbers.
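
A minimal sketch of the kind of flowfield-conditioned node reduction the abstract describes (not the authors' code): a learnable scoring vector ranks nodes, the top fraction is retained, and the retained indices are what one would visualize as a "masked field" on the original mesh. The class name, scoring rule, and retention ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TopKNodeReduction(nn.Module):
    """Illustrative flowfield-conditioned node sampling (assumed form).

    Scores each node from its features, keeps the top `ratio` fraction,
    and returns the kept indices -- plotting these indices over the mesh
    gives a masked-field-style visualization of where the latent graph
    is active.
    """
    def __init__(self, n_features: int, ratio: float = 0.25):
        super().__init__()
        self.score = nn.Linear(n_features, 1)   # learnable scoring vector
        self.ratio = ratio

    def forward(self, x: torch.Tensor):
        # x: [n_nodes, n_features] nodal flow quantities (e.g. velocity components)
        s = self.score(x).squeeze(-1)                # [n_nodes] raw node scores
        k = max(1, int(self.ratio * x.shape[0]))     # number of nodes to retain
        keep = torch.topk(s, k).indices              # indices of retained nodes
        # gate retained features by their (sigmoid) scores so the scoring
        # vector receives gradients through the reconstruction loss
        x_reduced = x[keep] * torch.sigmoid(s[keep]).unsqueeze(-1)
        return x_reduced, keep                       # keep -> masked field
```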
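Along the same lines, a hedged sketch of a single multiscale message-passing step: standard neighbor aggregation on the fine mesh graph, a coarse-level exchange via a precomputed cluster assignment, and a broadcast back to fine nodes. The fixed clustering here is a stand-in for the learnable coarsening operations in the paper's MMP layer, and all layer names and widths are assumptions.

```python
import torch
import torch.nn as nn

class MultiscaleMessagePassing(nn.Module):
    """Sketch of one multiscale message-passing (MMP-style) layer.

    Fine-scale messages are aggregated along mesh edges; a coarse pass
    averages node states within clusters and broadcasts the result back,
    widening the receptive field within a single layer.
    """
    def __init__(self, width: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * width, width), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * width, width), nn.ReLU())

    def forward(self, h, edge_index, cluster):
        # h: [n_nodes, width], edge_index: [2, n_edges] (src, dst),
        # cluster: [n_nodes] long tensor mapping each node to a coarse cell
        src, dst = edge_index
        msg = self.edge_mlp(torch.cat([h[src], h[dst]], dim=-1))  # fine-scale messages
        agg = torch.zeros_like(h).index_add_(0, dst, msg)         # sum messages per node

        n_clusters = int(cluster.max()) + 1
        coarse = torch.zeros(n_clusters, h.shape[1], device=h.device)
        coarse = coarse.index_add_(0, cluster, h)                 # sum states per cluster
        counts = torch.bincount(cluster, minlength=n_clusters).clamp(min=1)
        coarse = coarse / counts.unsqueeze(-1)                    # cluster means
        h_coarse = coarse[cluster]                                # broadcast to fine nodes

        return h + self.node_mlp(torch.cat([agg, h_coarse], dim=-1))  # residual update
```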
Related papers
- Scalable Weibull Graph Attention Autoencoder for Modeling Document Networks [50.42343781348247]
We develop a graph Poisson factor analysis (GPFA) which provides analytic conditional posteriors to improve the inference accuracy.
We also extend GPFA to a multi-stochastic-layer version named graph Poisson gamma belief network (GPGBN) to capture the hierarchical document relationships at multiple semantic levels.
Our models can extract high-quality hierarchical latent document representations and achieve promising performance on various graph analytic tasks.
arXiv Detail & Related papers (2024-10-13T02:22:14Z) - Mesh-based Super-Resolution of Fluid Flows with Multiscale Graph Neural Networks [0.0]
A graph neural network (GNN) approach is introduced in this work which enables mesh-based three-dimensional super-resolution of fluid flows.
In this framework, the GNN is designed to operate not on the full mesh-based field at once, but on localized meshes of elements (or cells) directly.
arXiv Detail & Related papers (2024-09-12T05:52:19Z) - Predicting Transonic Flowfields in Non-Homogeneous Unstructured Grids Using Autoencoder Graph Convolutional Networks [0.0]
This paper focuses on addressing challenges posed by non-homogeneous unstructured grids, commonly used in Computational Fluid Dynamics (CFD).
The core of our approach centers on geometric deep learning, specifically the utilization of a graph convolutional network (GCN).
The novel Autoencoder GCN architecture enhances prediction accuracy by propagating information to distant nodes and emphasizing influential points.
arXiv Detail & Related papers (2024-05-07T15:18:21Z) - TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z) - UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z) - Interpretable A-posteriori Error Indication for Graph Neural Network Surrogate Models [0.0]
This work introduces an interpretability enhancement procedure for graph neural networks (GNNs).
The end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task.
The interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error.
arXiv Detail & Related papers (2023-11-13T18:37:07Z) - Identification of vortex in unstructured mesh with graph neural networks [0.0]
We present a Graph Neural Network (GNN) based model with U-Net architecture to identify the vortex in CFD results on unstructured meshes.
A vortex auto-labeling method is proposed to label vortex regions in 2D CFD meshes.
arXiv Detail & Related papers (2023-11-11T12:10:16Z) - A Graph Encoder-Decoder Network for Unsupervised Anomaly Detection [7.070726553564701]
We propose an unsupervised graph encoder-decoder model to detect abnormal nodes from graphs.
In the encoding stage, we design a novel pooling mechanism, named LCPool, to find a cluster assignment matrix.
In the decoding stage, we propose an unpooling operation, called LCUnpool, to reconstruct both the structure and nodal features of the original graph.
arXiv Detail & Related papers (2023-08-15T13:49:12Z) - From NeurODEs to AutoencODEs: a mean-field control framework for
width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
arXiv Detail & Related papers (2020-10-06T01:20:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.