Revealing Decurve Flows for Generalized Graph Propagation
- URL: http://arxiv.org/abs/2402.08480v1
- Date: Tue, 13 Feb 2024 14:13:17 GMT
- Title: Revealing Decurve Flows for Generalized Graph Propagation
- Authors: Chen Lin, Liheng Ma, Yiyang Chen, Wanli Ouyang, Michael M. Bronstein
and Philip H.S. Torr
- Abstract summary: This study addresses the limitations of the traditional analysis of message-passing, central to graph learning, by defining generalized propagation with directed and weighted graphs.
We include a preliminary exploration of learned propagation patterns in datasets, a first in the field.
- Score: 108.80758541147418
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study addresses the limitations of the traditional analysis of
message-passing, central to graph learning, by defining generalized propagation
with directed and weighted graphs. The significance manifests in two ways.
Firstly, we propose Generalized Propagation Neural Networks (GPNNs), a framework
that unifies most propagation-based graph neural networks. By generating directed,
weighted propagation graphs with an adjacency function and a connectivity function,
GPNNs offer enhanced insight into attention mechanisms across various graph models.
We examine the trade-offs within the design space through empirical experiments and,
via theoretical analysis, emphasize the crucial role of the adjacency function for
model expressivity. Secondly, we propose the Continuous Unified Ricci Curvature
(CURC), an extension of the celebrated Ollivier-Ricci Curvature to directed and
weighted graphs. Theoretically, we demonstrate that CURC possesses continuity,
scale invariance, and a lower-bound connection with the Dirichlet isoperimetric
constant, validating bottleneck analysis for GPNNs. We include a preliminary
exploration of learned propagation patterns in datasets, a first in the field.
We observe an intriguing "decurve flow": a reduction in curvature during training
for models with learnable propagation, revealing the evolution of propagation over
time and a deeper connection to the over-smoothing and bottleneck trade-off.
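
The abstract describes GPNNs as generating a directed, weighted propagation graph from an adjacency function (which scores ordered node pairs) and a connectivity function (which decides which of those edges are kept). The sketch below illustrates that idea with one plausible instantiation: attention-style dot-product scores as the adjacency function and a top-k mask as the connectivity function. This is not the authors' implementation; the class name, the score function, and the top-k choice are all illustrative assumptions.

```python
# Minimal sketch of a "generalized propagation" layer in the spirit of GPNNs.
# Assumptions (not from the paper): dot-product attention as the adjacency
# function, top-k sparsification as the connectivity function.
import torch
import torch.nn as nn


class GeneralizedPropagationLayer(nn.Module):
    def __init__(self, dim: int, k: int = 8):
        super().__init__()
        self.query = nn.Linear(dim, dim)  # adjacency function: scores every ordered pair
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.k = k                        # connectivity function: keep k strongest out-edges per node

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node features
        scores = self.query(x) @ self.key(x).T / x.shape[-1] ** 0.5  # (N, N) directed edge scores
        # connectivity function: top-k mask makes the propagation graph sparse and directed
        topk = scores.topk(self.k, dim=-1).indices
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(-1, topk, 0.0)
        weights = torch.softmax(scores + mask, dim=-1)               # row-stochastic edge weights
        return weights @ self.value(x)                               # one propagation step


if __name__ == "__main__":
    layer = GeneralizedPropagationLayer(dim=16, k=4)
    h = torch.randn(10, 16)
    print(layer(h).shape)  # torch.Size([10, 16])
```

Because the learned edge weights are generally asymmetric, the resulting propagation graph is directed and weighted, which is exactly the setting that motivates extending Ricci curvature beyond undirected, unweighted graphs.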
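
CURC itself is defined in the paper; as background, the following is a minimal sketch of the classical Ollivier-Ricci curvature that CURC generalizes, computed on a small directed, weighted toy graph. The uniform out-neighbor measures, helper names, and the toy graph are assumptions made for illustration only, and the graph is assumed strongly connected so all distances are finite.

```python
# Sketch of classical Ollivier-Ricci curvature: kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y),
# with mu_v a uniform distribution over the out-neighbors of v. Illustrative only; this is
# NOT the paper's CURC construction.
import numpy as np
from scipy.optimize import linprog


def shortest_paths(w: np.ndarray) -> np.ndarray:
    """All-pairs shortest paths (Floyd-Warshall) on a weight matrix with np.inf for non-edges."""
    d = w.copy()
    np.fill_diagonal(d, 0.0)
    for k in range(d.shape[0]):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d


def wasserstein_1(mu: np.ndarray, nu: np.ndarray, dist: np.ndarray) -> float:
    """W1(mu, nu) via the transport LP: minimize <dist, plan> subject to fixed marginals."""
    n = dist.shape[0]
    c = dist.reshape(-1)
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0  # row i of the plan sums to mu[i]
        A_eq[n + i, i::n] = 1.0           # column i of the plan sums to nu[i]
    b_eq = np.concatenate([mu, nu])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun


def ollivier_ricci(w: np.ndarray, x: int, y: int) -> float:
    """Curvature of the directed edge (x, y) with uniform out-neighbor measures."""
    d = shortest_paths(w)

    def neighbor_measure(v: int) -> np.ndarray:
        mask = np.isfinite(w[v]) & (np.arange(len(w)) != v)
        mu = mask.astype(float)
        return mu / mu.sum()

    return 1.0 - wasserstein_1(neighbor_measure(x), neighbor_measure(y), d) / d[x, y]


if __name__ == "__main__":
    inf = np.inf
    # 4-node directed, weighted toy graph (inf = no edge)
    w = np.array([[inf, 1.0, 1.0, inf],
                  [1.0, inf, 1.0, inf],
                  [1.0, 1.0, inf, 1.0],
                  [inf, inf, 1.0, inf]])
    print(ollivier_ricci(w, 0, 1))  # positive curvature on this well-connected edge
```

The "decurve flow" observation in the abstract concerns how such edge curvatures, suitably generalized by CURC, decrease over training for models with learnable propagation.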
Related papers
- Injecting Hamiltonian Architectural Bias into Deep Graph Networks for Long-Range Propagation [55.227976642410766]
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning.
Motivated by this, we introduce (port-)Hamiltonian Deep Graph Networks.
We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors.
arXiv Detail & Related papers (2024-05-27T13:36:50Z)
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- MixupExplainer: Generalizing Explanations for Graph Neural Networks with Data Augmentation [6.307753856507624]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We shed light on the existence of the distribution shifting issue in existing methods, which affects explanation quality.
arXiv Detail & Related papers (2023-07-15T15:46:38Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM).
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z)
- A Graph Data Augmentation Strategy with Entropy Preserving [11.886325179121226]
We introduce a novel graph entropy definition as a quantitative index to evaluate the feature information of a graph.
Under considerations of preserving graph entropy, we propose an effective strategy to generate training data using a perturbed mechanism.
Our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
arXiv Detail & Related papers (2021-07-13T12:58:32Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper GNNs to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)