First-order PDEs for Graph Neural Networks: Advection and Burgers Equation Models
- URL: http://arxiv.org/abs/2404.03081v1
- Date: Wed, 3 Apr 2024 21:47:02 GMT
- Title: First-order PDEs for Graph Neural Networks: Advection and Burgers Equation Models
- Authors: Yifan Qu, Oliver Krzysik, Hans De Sterck, Omer Ege Kara
- Abstract summary: This paper presents new Graph Neural Network models that incorporate two first-order Partial Differential Equations (PDEs).
Our experimental findings highlight the capacity of the new PDE models to achieve results comparable to those of higher-order PDE models and to mitigate the over-smoothing problem at depths of up to 64 layers.
Results underscore the adaptability and versatility of GNNs, indicating that unconventional approaches can yield outcomes on par with established techniques.
- Score: 1.4174475093445238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have established themselves as the preferred methodology in a multitude of domains, ranging from computer vision to computational biology, especially in contexts where data inherently conform to graph structures. While many existing methods have endeavored to model GNNs using various techniques, a prevalent challenge they grapple with is the issue of over-smoothing. This paper presents new Graph Neural Network models that incorporate two first-order Partial Differential Equations (PDEs). These models do not increase complexity but effectively mitigate the over-smoothing problem. Our experimental findings highlight the capacity of the new PDE models to achieve results comparable to those of higher-order PDE models and to mitigate the over-smoothing problem at depths of up to 64 layers. These results underscore the adaptability and versatility of GNNs, indicating that unconventional approaches can yield outcomes on par with established techniques.
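Since the abstract does not spell out the mechanism, here is a minimal, hypothetical PyTorch sketch of what one explicit Euler step of an advection-style GNN layer could look like: node features are transported along edges with learned non-negative velocities rather than averaged, which is the intuition for why a first-order transport model can resist over-smoothing. The module name, velocity parameterisation, and step size are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn as nn

class GraphAdvectionLayer(nn.Module):
    """One explicit Euler step of dx/dt = -div(v x) on a graph (illustrative).

    Features are moved along edges with learned, non-negative velocities,
    so mass is transported rather than averaged.
    """

    def __init__(self, dim: int, tau: float = 0.1):
        super().__init__()
        self.tau = tau
        # Edge velocity computed from the two endpoint features (an assumption).
        self.vel = nn.Sequential(nn.Linear(2 * dim, 1), nn.Softplus())

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                              # directed edges src -> dst
        v = self.vel(torch.cat([x[src], x[dst]], dim=-1))  # (E, 1), v >= 0
        flux = v * x[src]                                  # mass leaving src per edge
        out = x.clone()
        out.index_add_(0, dst, self.tau * flux)            # inflow at dst
        out.index_add_(0, src, -self.tau * flux)           # outflow at src
        return out

# Toy usage: 3 nodes on a directed path 0 -> 1 -> 2.
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1], [1, 2]])
layer = GraphAdvectionLayer(dim=4)
print(layer(x, edge_index).shape)  # torch.Size([3, 4])
```

Because the flux leaving a node equals the flux arriving at its neighbour, total feature mass is conserved; contrast this with diffusion, which drives all node features toward a common mean.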
Related papers
- Graph Neural Reaction Diffusion Models [14.164952387868341]
We propose a novel family of Reaction GNNs (RDGNNs) based on neural reaction-diffusion (RD) systems.
We discuss the theoretical properties of our RDGNN and its implementation, and show that it improves on or offers performance competitive with state-of-the-art methods.
arXiv Detail & Related papers (2024-06-16T09:46:58Z) - BG-HGNN: Toward Scalable and Efficient Heterogeneous Graph Neural Network [6.598758004828656]
Heterogeneous graph neural networks (HGNNs) stand out as a promising neural model class designed for heterogeneous graphs.
Existing HGNNs employ different parameter spaces to model the varied relationships.
This paper introduces Blend&Grind-HGNN (BG-HGNN), which integrates different relations into a unified feature space manageable by a single set of parameters.
arXiv Detail & Related papers (2024-03-13T03:03:40Z) - GNRK: Graph Neural Runge-Kutta method for solving partial differential equations [0.0]
This study introduces a novel approach called Graph Neural Runge-Kutta (GNRK).
GNRK integrates graph neural network modules with a recurrent structure inspired by classical solvers (a minimal Runge-Kutta sketch appears at the end of this page).
It can address general PDEs, irrespective of initial conditions or PDE coefficients.
arXiv Detail & Related papers (2023-10-01T08:52:46Z) - Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Will More Expressive Graph Neural Networks do Better on Generative Tasks? [27.412913421460388]
The Graph Neural Network (GNN) architectures underlying graph generative models are often underexplored.
We replace the underlying GNNs of graph generative models with more expressive GNNs.
The resulting models achieve state-of-the-art results against 17 non-GNN-based graph generative approaches.
arXiv Detail & Related papers (2023-08-23T07:57:45Z) - MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies.
arXiv Detail & Related papers (2022-10-15T18:18:55Z) - Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z) - EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural
Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter.
arXiv Detail & Related papers (2022-05-27T10:48:14Z) - EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited ability of finite-depth GNNs to capture long-range dependencies, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance (a fixed-point sketch of the implicit-GNN idea appears at the end of this page).
arXiv Detail & Related papers (2022-02-22T08:16:58Z) - GRAND: Graph Neural Diffusion [15.00135729657076]
We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process.
In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators.
Key to the success of our models is stability with respect to perturbations in the data; this is addressed for both implicit and explicit discretisation schemes (see the Euler diffusion sketch at the end of this page).
arXiv Detail & Related papers (2021-06-21T09:10:57Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem (a one-step gradient sketch appears at the end of this page).
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
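The following sketches expand on entries in the list above. First, for GNRK: the core mechanism is a classical Runge-Kutta integrator whose right-hand side is a neural module. A hedged sketch, assuming f is any callable mapping (t, x) to dx/dt; in GNRK proper, f is a graph neural network conditioned on the graph topology and the PDE coefficients.

```python
import torch

def rk4_step(f, t: float, x: torch.Tensor, h: float) -> torch.Tensor:
    """One classical RK4 step of dx/dt = f(t, x); f may be a neural network."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy usage with a linear "network" as the right-hand side.
f = lambda t, x: -x
x = torch.ones(5)
x = rk4_step(f, t=0.0, x=x, h=0.1)   # each entry is approximately exp(-0.1)
```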
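For MGNNI and EIGNN: implicit GNNs replace a fixed layer stack with an equilibrium computation, iterating one update to its fixed point so the effective depth is unbounded. A minimal sketch assuming a tanh update and a crude contraction factor; the papers' actual parameterisations and convergence guarantees differ.

```python
import torch

def implicit_gnn_fixed_point(X: torch.Tensor, A: torch.Tensor, W: torch.Tensor,
                             tol: float = 1e-5, max_iter: int = 100) -> torch.Tensor:
    """Iterate Z <- tanh(gamma * A Z W + X) to a fixed point ("infinite depth")."""
    # Shrink gamma so the update is a contraction (a rough, illustrative bound).
    gamma = 0.9 / (torch.linalg.matrix_norm(A, 2) * torch.linalg.matrix_norm(W, 2))
    Z = torch.zeros_like(X)
    for _ in range(max_iter):
        Z_next = torch.tanh(gamma * A @ Z @ W + X)
        if (Z_next - Z).abs().max() < tol:
            break
        Z = Z_next
    return Z

# Toy usage: 4 nodes, 3 features.
A = torch.rand(4, 4)
X, W = torch.randn(4, 3), torch.randn(3, 3)
Z = implicit_gnn_fixed_point(X, A, W)
```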
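For GRAND: its reading of layers as discretisation choices reduces, in the simplest explicit case, to Euler stepping of a diffusion ODE. This sketch substitutes a fixed row-normalised adjacency for GRAND's learned attention matrix in order to stay self-contained.

```python
import torch

def euler_diffusion_step(x: torch.Tensor, A: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """x_{t+1} = x_t + tau * (A - I) x_t: one explicit Euler step of graph diffusion."""
    return x + tau * (A @ x - x)

# Row-normalised adjacency of a triangle graph; "depth" = number of steps.
adj = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
A = adj / adj.sum(dim=1, keepdim=True)
x = torch.randn(3, 4)
for _ in range(10):
    x = euler_diffusion_step(x, A)   # features converge toward agreement: over-smoothing
```

Running many such steps drives all rows of x toward a common value, which is precisely the over-smoothing behaviour the first-order advection model in the main paper aims to avoid.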
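For the graph-signal-denoising view: aggregation is framed as minimising ||F - X||_F^2 + c * tr(F^T L F) over F, where L is a graph Laplacian. One gradient step from F = X yields a GCN-style propagation; the sketch below shows that single step (the paper's general objective allows node-adaptive smoothness weights, omitted here).

```python
import torch

def denoising_gradient_step(F: torch.Tensor, X: torch.Tensor, L: torch.Tensor,
                            c: float = 1.0, lr: float = 0.25) -> torch.Tensor:
    """Gradient of ||F - X||_F^2 + c * tr(F^T L F) is 2(F - X) + 2c L F."""
    return F - lr * (2 * (F - X) + 2 * c * (L @ F))

# Starting from F = X, one step gives (I - 2*lr*c*L) X: a propagation layer.
adj = torch.tensor([[0., 1.], [1., 0.]])
L = torch.diag(adj.sum(1)) - adj        # combinatorial graph Laplacian
X = torch.randn(2, 3)
F = denoising_gradient_step(X.clone(), X, L)
```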