Position-aware Structure Learning for Graph Topology-imbalance by
Relieving Under-reaching and Over-squashing
- URL: http://arxiv.org/abs/2208.08302v1
- Date: Wed, 17 Aug 2022 14:04:21 GMT
- Title: Position-aware Structure Learning for Graph Topology-imbalance by
Relieving Under-reaching and Over-squashing
- Authors: Qingyun Sun, Jianxin Li, Haonan Yuan, Xingcheng Fu, Hao Peng, Cheng
Ji, Qian Li, Philip S. Yu
- Abstract summary: Topology-imbalance is a graph-specific imbalance problem caused by the uneven topology positions of labeled nodes.
We propose a novel position-aware graph structure learning framework named PASTEL.
Our key insight is to enhance the connectivity of nodes within the same class for more supervision information.
- Score: 67.83086131278904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Topology-imbalance is a graph-specific imbalance problem caused by the uneven
topology positions of labeled nodes, which significantly damages the
performance of GNNs. What topology-imbalance means and how to measure its
impact on graph learning remain under-explored. In this paper, we provide a new
understanding of topology-imbalance from a global view of the supervision
information distribution in terms of under-reaching and over-squashing, which
motivates two quantitative metrics as measurements. In light of our analysis,
we propose a novel position-aware graph structure learning framework named
PASTEL, which directly optimizes the information propagation path and solves
the topology-imbalance issue in essence. Our key insight is to enhance the
connectivity of nodes within the same class for more supervision information,
thereby relieving the under-reaching and over-squashing phenomena.
Specifically, we design an anchor-based position encoding mechanism, which
better incorporates relative topology position and enhances the intra-class
inductive bias by maximizing the label influence. We further propose a
class-wise conflict measure as the edge weights, which benefits the separation
of different node classes. Extensive experiments demonstrate the superior
potential and adaptability of PASTEL in enhancing GNNs' power in different data
annotation scenarios.
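For concreteness, the two mechanisms named in the abstract can be illustrated with a minimal sketch (Python, NumPy + NetworkX). This is not the PASTEL implementation: treating the labeled nodes as anchors, the 1/(1+d) distance transform, and the histogram-intersection conflict score are all illustrative assumptions.

```python
# Minimal sketch of (1) an anchor-based position encoding and (2) a class-wise
# conflict score used as edge weights. Illustrative only, not PASTEL itself.
import numpy as np
import networkx as nx

def anchor_position_encoding(G, anchors):
    """Encode node v by 1 / (1 + shortest-path distance) to each anchor."""
    enc = np.zeros((G.number_of_nodes(), len(anchors)))
    for j, a in enumerate(anchors):
        for v, d in nx.single_source_shortest_path_length(G, a).items():
            enc[v, j] = 1.0 / (1.0 + d)      # unreachable nodes keep encoding 0
    return enc

def conflict_aware_edge_weights(G, enc, anchor_labels):
    """Weight each edge by how much class influence its endpoints share."""
    classes = np.unique(anchor_labels)
    # Per-node influence received from anchors of each class.
    influence = np.stack(
        [enc[:, anchor_labels == c].sum(axis=1) for c in classes], axis=1)
    influence = influence / (influence.sum(axis=1, keepdims=True) + 1e-12)
    weights = {}
    for u, v in G.edges():
        # High overlap of the two class-influence distributions means low
        # class conflict, so the edge gets a larger weight.
        weights[(u, v)] = float(np.minimum(influence[u], influence[v]).sum())
    return weights

# Toy usage: a random graph with four "labeled" nodes acting as anchors.
G = nx.gnp_random_graph(30, 0.15, seed=0)
anchors, anchor_labels = [0, 1, 2, 3], np.array([0, 0, 1, 1])
enc = anchor_position_encoding(G, anchors)
edge_weights = conflict_aware_edge_weights(G, enc, anchor_labels)
```

In this toy version, edges whose endpoints receive influence from the same classes keep a weight near 1, while edges bridging conflicting classes are down-weighted, mirroring the abstract's goal of strengthening intra-class connectivity.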
Related papers
- When Witnesses Defend: A Witness Graph Topological Layer for Adversarial Graph Learning [19.566775406771757]
We bridge adversarial graph learning with the emerging tools from computational topology, namely, persistent homology representations of graphs.
We introduce the concept of witness complex to adversarial analysis on graphs, which allows us to focus only on the salient shape characteristics of graphs, with minimal loss of topological information on the whole graph.
Armed with the witness mechanism, we design Witness Graph Topological Layer (WGTL), which systematically integrates both local and global topological graph feature representations, the impact of which is, in turn, automatically controlled by the robust regularized topological loss.
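To make the landmark ("witness") idea concrete: a small set of representative nodes can be chosen by maxmin sampling over shortest-path distances, and topological summaries are then computed on that subset instead of the full graph. The sketch below shows only this landmark-selection step; it is not the WGTL layer, and the sampling rule is an assumption.

```python
# Hedged sketch: pick k landmark nodes by maxmin sampling over shortest-path
# distances (each new landmark is the node farthest from the current set).
import networkx as nx

def maxmin_landmarks(G, k, seed_node=0):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    landmarks = [seed_node]
    while len(landmarks) < k:
        best, best_d = None, -1
        for v in G.nodes():
            if v in landmarks:
                continue
            d = min(dist[v].get(l, float("inf")) for l in landmarks)
            if d > best_d and d != float("inf"):
                best, best_d = v, d
        if best is None:        # nothing reachable is left
            break
        landmarks.append(best)
    return landmarks

G = nx.karate_club_graph()
print(maxmin_landmarks(G, 5))
```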
arXiv Detail & Related papers (2024-09-21T14:53:32Z)
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
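A shallow (one-layer) Graph Transformer that combines full node self-attention with Laplacian-eigenvector positional encodings might look like the PyTorch sketch below; the exact model analyzed in the paper is not given here, so the layer sizes and the choice of Laplacian PE are assumptions.

```python
# One-layer Graph Transformer sketch: node self-attention + Laplacian PE.
import torch
import torch.nn as nn

class ShallowGraphTransformer(nn.Module):
    def __init__(self, in_dim, pe_dim, hidden, num_classes, heads=4):
        super().__init__()
        self.inp = nn.Linear(in_dim + pe_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, x, pe):
        # x: [N, in_dim] node features, pe: [N, pe_dim] positional encodings.
        h = self.inp(torch.cat([x, pe], dim=-1)).unsqueeze(0)   # [1, N, hidden]
        h, _ = self.attn(h, h, h)                               # full self-attention
        return self.out(h.squeeze(0))                           # [N, num_classes]

def laplacian_pe(adj, k):
    """First k non-trivial eigenvectors of the symmetric normalized Laplacian."""
    deg = adj.sum(1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1).pow(-0.5))
    lap = torch.eye(adj.size(0)) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, vecs = torch.linalg.eigh(lap)
    return vecs[:, 1:k + 1]

# Toy usage: 50 nodes, 16 features, 8 PE dims, 3 classes.
adj = (torch.rand(50, 50) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()
x = torch.randn(50, 16)
model = ShallowGraphTransformer(16, 8, 64, 3)
logits = model(x, laplacian_pe(adj, 8))
```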
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- Characterizing the Influence of Topology on Graph Learning Tasks [47.48010635921621]
Graph neural networks (GNNs) have achieved remarkable success in a wide range of tasks by encoding features combined with topology to create effective representations.
We propose a metric, TopoInf, which characterizes the influence of graph topology by measuring the level of compatibility between the topological information of graph data and downstream task objectives.
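TopoInf's definition is not reproduced in the summary. A much cruder, commonly used stand-in for topology/label compatibility is edge homophily (the fraction of edges joining same-label nodes), sketched below purely for illustration.

```python
# Crude topology/label compatibility proxy (edge homophily), NOT TopoInf itself.
import networkx as nx

def edge_homophily(G, labels):
    edges = list(G.edges())
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / max(len(edges), 1)

G = nx.karate_club_graph()
labels = [G.nodes[v]["club"] for v in G.nodes()]   # two ground-truth communities
print(edge_homophily(G, labels))
```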
arXiv Detail & Related papers (2024-04-11T06:04:06Z)
- On the Topology Awareness and Generalization Performance of Graph Neural Networks [6.598758004828656]
We introduce a comprehensive framework to characterize the topology awareness of GNNs across any topological feature.
We conduct a case study using an intrinsic graph metric, the shortest-path distance, on various benchmark datasets.
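In the spirit of such a case study, one could probe topology awareness by checking how closely embedding distances track shortest-path distances; the sketch below uses random embeddings as a stand-in and a plain Pearson correlation, both assumptions rather than the paper's protocol.

```python
# Probe how well embedding distances reflect shortest-path distances.
import numpy as np
import networkx as nx

def topology_awareness_score(G, emb):
    """Pearson correlation between pairwise SPD and embedding distance."""
    spd, euc = [], []
    for u, lengths in nx.all_pairs_shortest_path_length(G):
        for v, d in lengths.items():
            if u < v:
                spd.append(d)
                euc.append(np.linalg.norm(emb[u] - emb[v]))
    return float(np.corrcoef(spd, euc)[0, 1])

G = nx.karate_club_graph()
emb = np.random.default_rng(0).normal(size=(G.number_of_nodes(), 16))  # stand-in embeddings
print(topology_awareness_score(G, emb))
```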
arXiv Detail & Related papers (2024-03-07T13:33:30Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
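A minimal sketch of the "anchor graph + contrastive agreement" recipe, in PyTorch: each node's row in the anchor adjacency and in the learned adjacency is treated as a pair of views and matched with an InfoNCE-style loss. The anchor-graph construction, similarity function, and temperature are assumptions, not the paper's design.

```python
# Sketch: pull matching rows of the anchor adjacency and the learned adjacency
# together with an InfoNCE-style contrastive loss.
import torch
import torch.nn.functional as F

def graph_contrastive_loss(anchor_adj, learned_adj, temperature=0.5):
    a = F.normalize(anchor_adj, dim=1)      # [N, N] row-normalized "views"
    b = F.normalize(learned_adj, dim=1)
    logits = a @ b.t() / temperature        # similarity of every row pair
    targets = torch.arange(a.size(0))       # row i should match row i
    return F.cross_entropy(logits, targets)

N = 20
anchor_adj = (torch.rand(N, N) < 0.2).float()     # stand-in anchor graph
theta = torch.randn(N, N, requires_grad=True)     # learnable structure logits
learned_adj = torch.sigmoid(theta)
loss = graph_contrastive_loss(anchor_adj, learned_adj)
loss.backward()                                    # gradients flow into theta
```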
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Topology-Imbalance Learning for Semi-Supervised Node Classification [34.964665078512596]
We argue that graph data expose a unique source of imbalance from the asymmetric topological properties of the labeled nodes.
We devise an influence-conflict-detection-based metric, Totoro, to measure the degree of graph topology imbalance.
We propose a model-agnostic method ReNode to address the topology-imbalance issue.
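As a hedged sketch of the influence-conflict idea: estimate each labeled node's influence with personalized PageRank, score per-node class conflict, and down-weight high-conflict training nodes. The PageRank influence model and the inverse-conflict weights are assumptions; the paper's Totoro and ReNode formulas are not reproduced here.

```python
# Sketch: PPR-based label influence, per-node class conflict, and simple
# inverse-conflict training weights.
import numpy as np
import networkx as nx

def label_influence(G, labeled, labels, alpha=0.85):
    classes = sorted(set(labels.values()))
    influence = np.zeros((G.number_of_nodes(), len(classes)))
    for src in labeled:
        ppr = nx.pagerank(G, alpha=alpha, personalization={src: 1.0})
        col = classes.index(labels[src])
        for v, score in ppr.items():
            influence[v, col] += score
    return influence

def conflict_and_weights(influence):
    p = influence / (influence.sum(axis=1, keepdims=True) + 1e-12)
    conflict = 1.0 - p.max(axis=1)           # 0 = one class dominates; higher = more conflict
    return conflict, 1.0 / (1.0 + conflict)  # simple inverse-conflict training weights

G = nx.karate_club_graph()
labeled, labels = [0, 33], {0: 0, 33: 1}
conflict, weights = conflict_and_weights(label_influence(G, labeled, labels))
```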
arXiv Detail & Related papers (2021-10-08T12:57:38Z)
- A Deep Graph Neural Networks Architecture Design: From Global Pyramid-like Shrinkage Skeleton to Local Topology Link Rewiring [1.455240131708017]
We propose a three-pipeline training framework based on critical expressivity, including global model contraction, weight evolution, and link weight rewiring.
We analyze the reason for the modularity (clustering) phenomenon in network topology and use it to rewire potential erroneous weighted links.
The architecture design on GNNs, in turn, verifies the expressivity of GNNs from dynamics and topological space aspects.
arXiv Detail & Related papers (2020-12-16T03:14:31Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)