A Fractional Graph Laplacian Approach to Oversmoothing
- URL: http://arxiv.org/abs/2305.13084v2
- Date: Tue, 31 Oct 2023 14:45:25 GMT
- Title: A Fractional Graph Laplacian Approach to Oversmoothing
- Authors: Sohir Maskey, Raffaele Paolino, Aras Bacho, Gitta Kutyniok
- Abstract summary: We generalize the concept of oversmoothing from undirected to directed graphs.
We propose fractional graph Laplacian neural ODEs, which describe non-local dynamics.
Our method is more flexible with respect to the convergence of the graph's Dirichlet energy, thereby mitigating oversmoothing.
- Score: 15.795926248847026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have shown state-of-the-art performance in
various applications. However, GNNs often struggle to capture long-range
dependencies in graphs due to oversmoothing. In this paper, we generalize the
concept of oversmoothing from undirected to directed graphs. To this end, we
extend the notion of Dirichlet energy by considering a directed symmetrically
normalized Laplacian. As vanilla graph convolutional networks are prone to
oversmoothing, we adopt a neural graph ODE framework. Specifically, we propose
fractional graph Laplacian neural ODEs, which describe non-local dynamics. We
prove that our approach allows propagating information between distant nodes
while maintaining a low probability of long-distance jumps. Moreover, we show
that our method is more flexible with respect to the convergence of the graph's
Dirichlet energy, thereby mitigating oversmoothing. We conduct extensive
experiments on synthetic and real-world graphs, both directed and undirected,
demonstrating our method's versatility across diverse graph homophily levels.
Our code is available at https://github.com/RPaolino/fLode .
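The abstract names three ingredients: a symmetrically normalized graph Laplacian, a fractional power of it that drives a neural ODE with non-local dynamics, and the Dirichlet energy used to quantify oversmoothing. The following NumPy snippet is a minimal, purely illustrative sketch of these objects on an undirected toy graph; it is not the authors' implementation (see the repository linked above for that), the linear dynamics dX/dt = -L^alpha X stand in for the learned fractional-Laplacian neural ODE, and names such as fractional_power, alpha, and step_size are illustrative choices rather than quantities from the paper.
```python
# Illustrative sketch only; not the fLode implementation.
import numpy as np

def sym_normalized_laplacian(A: np.ndarray) -> np.ndarray:
    """L = I - D^{-1/2} A D^{-1/2} for an undirected adjacency matrix A."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def fractional_power(L: np.ndarray, alpha: float) -> np.ndarray:
    """L^alpha via eigendecomposition; a non-integer alpha yields a dense
    matrix, i.e. non-local coupling between distant nodes."""
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs @ np.diag(np.clip(eigvals, 0.0, None) ** alpha) @ eigvecs.T

def dirichlet_energy(L: np.ndarray, X: np.ndarray) -> float:
    """trace(X^T L X): a standard smoothness measure whose decay toward zero
    is the usual signature of oversmoothing."""
    return float(np.trace(X.T @ L @ X))

def euler_ode(X0: np.ndarray, L_alpha: np.ndarray,
              n_steps: int = 20, step_size: float = 0.1) -> np.ndarray:
    """Forward-Euler integration of dX/dt = -L^alpha X, a linear stand-in for
    the learned fractional-Laplacian neural ODE described in the paper."""
    X = X0.copy()
    for _ in range(n_steps):
        X = X - step_size * (L_alpha @ X)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: path graph on 5 nodes.
    A = np.zeros((5, 5))
    for i in range(4):
        A[i, i + 1] = A[i + 1, i] = 1.0
    L = sym_normalized_laplacian(A)
    L_half = fractional_power(L, alpha=0.5)
    X0 = rng.normal(size=(5, 3))
    XT = euler_ode(X0, L_half)
    print("Dirichlet energy before:", dirichlet_energy(L, X0))
    print("Dirichlet energy after: ", dirichlet_energy(L, XT))
```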
Related papers
- Improving Graph Neural Networks by Learning Continuous Edge Directions [0.0]
Graph Neural Networks (GNNs) traditionally employ a message-passing mechanism that resembles diffusion over undirected graphs.
Our key insight is to assign fuzzy edge directions to the edges of a graph so that features can preferentially flow in one direction between nodes.
We propose a general framework, called Continuous Edge Direction (CoED) GNN, for learning on graphs with fuzzy edges.
arXiv Detail & Related papers (2024-10-18T01:34:35Z) - Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning [3.236774847052122]
We study the expressive power of graph neural networks through the lens of graph partitioning.
We introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs).
arXiv Detail & Related papers (2023-12-14T06:08:35Z) - Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cuts, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z) - Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN).
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
arXiv Detail & Related papers (2020-09-14T06:04:55Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z) - Theoretically Expressive and Edge-aware Graph Learning [24.954342094176013]
We propose a new Graph Neural Network that combines recent advancements in the field.
We prove that the model is strictly more general than the Graph Isomorphism Network and the Gated Graph Neural Network.
arXiv Detail & Related papers (2020-01-24T13:43:39Z)