GCN-FFNN: A Two-Stream Deep Model for Learning Solution to Partial
Differential Equations
- URL: http://arxiv.org/abs/2204.13744v1
- Date: Thu, 28 Apr 2022 19:16:31 GMT
- Title: GCN-FFNN: A Two-Stream Deep Model for Learning Solution to Partial
Differential Equations
- Authors: Onur Bilgin, Thomas Vergutz, Siamak Mehrkanoon
- Abstract summary: This paper introduces a novel two-stream deep model based on a graph convolutional network (GCN) architecture and feed-forward neural networks (FFNN).
The proposed GCN-FFNN model learns from two types of input representations, i.e. grid and graph data, obtained via the discretization of the PDE domain.
The obtained numerical results demonstrate the applicability and efficiency of the proposed GCN-FFNN model over individual GCN and FFNN models.
- Score: 3.5665681694253903
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper introduces a novel two-stream deep model based on graph
convolutional network (GCN) architecture and feed-forward neural networks
(FFNN) for learning the solution of nonlinear partial differential equations
(PDEs). The model aims at incorporating both graph and grid input
representations using two streams corresponding to GCN and FFNN models,
respectively. Each stream layer receives and processes its own input
representation. As opposed to the FFNN, which receives a grid-like structure, the
GCN stream layer operates on graph input data, where the neighborhood
information is incorporated through the adjacency matrix of the graph. In this
way, the proposed GCN-FFNN model learns from two types of input
representations, i.e. grid and graph data, obtained via the discretization of
the PDE domain. The GCN-FFNN model is trained in two phases. In the first
phase, the model parameters of each stream are trained separately. Both streams
employ the same error function to adjust their parameters by enforcing the
models to satisfy the given PDE as well as its initial and boundary conditions
on grid or graph collocation (training) data. In the second phase, the learned
parameters of two-stream layers are frozen and their learned representation
solutions are fed to fully connected layers whose parameters are learned using
the previously used error function. The learned GCN-FFNN model is tested on
test data located both inside and outside the PDE domain. The obtained
numerical results demonstrate the applicability and efficiency of the proposed
GCN-FFNN model over individual GCN and FFNN models on the 1D-Burgers,
1D-Schrödinger, 2D-Burgers and 2D-Schrödinger equations.
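The two-stream, two-phase scheme described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: stream weights are random placeholders standing in for the parameters learned in phase one (in the paper these would be fit to the PDE residual plus initial and boundary conditions), and the chain-graph discretization, hidden size, and row-normalized adjacency are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretization of a 1D PDE domain: N collocation points.
N, d_hidden = 16, 8
x_grid = np.linspace(0.0, 1.0, N).reshape(N, 1)   # grid input for the FFNN stream

# Graph input for the GCN stream: chain graph over the same points,
# with self-loops, row-normalized (a common GCN propagation rule).
A = np.eye(N)
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)

def ffnn_stream(x, W1, W2):
    # Plain feed-forward stream on grid data.
    return np.tanh(x @ W1) @ W2

def gcn_stream(x, A_hat, W1, W2):
    # GCN stream: neighborhood information enters via the adjacency matrix.
    return np.tanh(A_hat @ x @ W1) @ W2

# Phase 1 (sketched): each stream's parameters would be trained separately
# against the same PDE-residual error function; here they are random
# placeholders that we then treat as frozen.
W1_f, W2_f = rng.normal(size=(1, d_hidden)), rng.normal(size=(d_hidden, d_hidden))
W1_g, W2_g = rng.normal(size=(1, d_hidden)), rng.normal(size=(d_hidden, d_hidden))

h_ffnn = ffnn_stream(x_grid, W1_f, W2_f)           # (N, d_hidden)
h_gcn = gcn_stream(x_grid, A_hat, W1_g, W2_g)      # (N, d_hidden)

# Phase 2: the frozen stream representations are concatenated and fed to
# fully connected layers whose output is the predicted solution per point.
W_fuse = rng.normal(size=(2 * d_hidden, 1))
u_hat = np.concatenate([h_ffnn, h_gcn], axis=1) @ W_fuse

print(u_hat.shape)  # (16, 1): one predicted solution value per collocation point
```

In the paper the fusion layer's parameters are trained with the same PDE-residual error function used in phase one; the sketch only shows the data flow, not the optimization.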
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Enhancing Data-Assimilation in CFD using Graph Neural Networks [0.0]
We present a novel machine learning approach for data assimilation applied in fluid mechanics, based on adjoint-optimization augmented by Graph Neural Networks (GNNs) models.
We obtain our results using direct numerical simulations based on a Finite Element Method (FEM) solver; a two-fold interface between the GNN model and the solver allows the GNN's predictions to be incorporated into post-processing steps of the FEM analysis.
arXiv Detail & Related papers (2023-11-29T19:11:40Z)
- A deep learning approach to solve forward differential problems on graphs [6.756351172952362]
We propose a novel deep learning approach to solve one-dimensional non-linear elliptic, parabolic, and hyperbolic problems on graphs.
A system of physics-informed neural network (PINN) models is used to solve the differential equations.
arXiv Detail & Related papers (2022-10-07T16:06:42Z)
- Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU) which is a new parametric activation function incorporating the neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Deep Neural Network Modeling of Unknown Partial Differential Equations in Nodal Space [1.8010196131724825]
We present a framework for deep neural network (DNN) modeling of unknown time-dependent partial differential equations (PDE) using trajectory data.
We present a DNN structure that has a direct correspondence to the evolution operator of the underlying PDE.
A trained DNN defines a predictive model for the underlying unknown PDE over structureless grids.
arXiv Detail & Related papers (2021-06-07T13:27:09Z)
- Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs [77.33781731432163]
For the first time, we learn dynamic graph representations in hyperbolic space, with the aim of inferring node representations.
We present a novel Hyperbolic Variational Graph Network, referred to as HVGNN.
In particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on a theoretically grounded time encoding approach.
arXiv Detail & Related papers (2021-04-06T01:44:15Z)
- Modeling Gate-Level Abstraction Hierarchy Using Graph Convolutional Neural Networks to Predict Functional De-Rating Factors [0.0]
The paper proposes a methodology for modeling a gate-level netlist using a Graph Convolutional Network (GCN).
The model predicts the overall functional de-rating factors of sequential elements of a given circuit.
arXiv Detail & Related papers (2021-04-05T08:38:16Z)
- CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks [23.115288017590093]
We investigate how Graph Neural Network (GNN) models can effectively leverage both types of information.
The proposed Copula Graph Neural Network (CopulaGNN) can take a wide range of GNN models as base models.
arXiv Detail & Related papers (2020-10-05T15:20:04Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
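The permutation-equivariance property of graph convolutional filters claimed in the last entry above can be checked numerically: relabeling the graph's nodes and then filtering gives the same result as filtering first and relabeling after. A minimal illustrative sketch (the random shift operator, signal, and filter taps are assumptions made for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6

# Random symmetric graph shift operator S and a node signal x.
S = rng.random((N, N))
S = (S + S.T) / 2
x = rng.normal(size=N)

def graph_filter(S, x, h):
    # Polynomial graph convolutional filter: sum_k h[k] * S^k @ x.
    y, Sk = np.zeros_like(x), np.eye(len(x))
    for hk in h:
        y += hk * (Sk @ x)
        Sk = Sk @ S
    return y

h = [0.5, 0.3, 0.2]                        # filter taps
P = np.eye(N)[rng.permutation(N)]          # random permutation matrix

# Equivariance: filtering the relabeled graph and signal equals
# relabeling the filter's output.
y_perm = graph_filter(P @ S @ P.T, P @ x, h)
y_then_perm = P @ graph_filter(S, x, h)
print(np.allclose(y_perm, y_then_perm))    # True
```

The identity holds because (P S Pᵀ)ᵏ P x = P Sᵏ x for any permutation matrix P, so every term of the filter polynomial commutes with the relabeling.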
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.