Beyond Smoothing: Unsupervised Graph Representation Learning with Edge
Heterophily Discriminating
- URL: http://arxiv.org/abs/2211.14065v1
- Date: Fri, 25 Nov 2022 12:39:41 GMT
- Title: Beyond Smoothing: Unsupervised Graph Representation Learning with Edge
Heterophily Discriminating
- Authors: Yixin Liu, Yizhen Zheng, Daokun Zhang, Vincent CS Lee, Shirui Pan
- Abstract summary: We propose a novel unsupervised Graph Representation learning method with Edge hEterophily discriminaTing (GREET)
GREET learns representations by discriminating and leveraging homophilic edges and heterophilic edges.
We conducted extensive experiments on 14 benchmark datasets and multiple learning scenarios to demonstrate the superiority of GREET.
- Score: 40.916070587912785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised graph representation learning (UGRL) has drawn increasing
research attention and achieved promising results in several graph analytic
tasks. Relying on the homophily assumption, existing UGRL methods tend to
smooth the learned node representations along all edges, ignoring the existence
of heterophilic edges that connect nodes with distinct attributes. As a result,
current methods struggle to generalize to heterophilic graphs where dissimilar
nodes are widely connected, and are also vulnerable to adversarial attacks. To
address this issue, we propose a novel unsupervised Graph Representation
learning method with Edge hEterophily discriminaTing (GREET) which learns
representations by discriminating and leveraging homophilic edges and
heterophilic edges. To distinguish two types of edges, we build an edge
discriminator that infers edge homophily/heterophily from feature and structure
information. We train the edge discriminator in an unsupervised way through
minimizing the crafted pivot-anchored ranking loss, with randomly sampled node
pairs acting as pivots. Node representations are learned through contrasting
the dual-channel encodings obtained from the discriminated homophilic and
heterophilic edges. With an effective interplaying scheme, edge discriminating
and representation learning can mutually boost each other during the training
phase. We conducted extensive experiments on 14 benchmark datasets and multiple
learning scenarios to demonstrate the superiority of GREET.
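The edge-discriminating idea in the abstract can be illustrated with a minimal NumPy sketch. Here cosine similarity of endpoint features stands in for the learned edge discriminator, and a simple margin ranking loss against randomly sampled node pairs stands in for the paper's pivot-anchored ranking loss; all function names, thresholds, and the toy graph are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_score(x, pairs):
    """Toy edge 'discriminator': cosine similarity of endpoint features.
    (GREET learns this score from feature and structure information;
    cosine similarity is only an illustrative stand-in.)"""
    u, v = pairs[:, 0], pairs[:, 1]
    num = (x[u] * x[v]).sum(axis=1)
    den = np.linalg.norm(x[u], axis=1) * np.linalg.norm(x[v], axis=1)
    return num / np.maximum(den, 1e-12)

def pivot_ranking_loss(x, edges, n_pivots=32, margin=0.1):
    """Pivot-anchored ranking idea, simplified: observed edges should
    score higher than randomly sampled node pairs (the 'pivots') by a
    margin -- random pairs are mostly dissimilar, so they anchor the
    low end of the ranking."""
    n = x.shape[0]
    pivots = rng.integers(0, n, size=(n_pivots, 2))
    s_edge = edge_score(x, edges).mean()
    s_pivot = edge_score(x, pivots).mean()
    return max(0.0, margin - (s_edge - s_pivot))

# Tiny homophilic toy graph: two feature clusters, edges within clusters.
x = np.vstack([rng.normal(1.0, 0.1, (5, 8)), rng.normal(-1.0, 0.1, (5, 8))])
edges = np.array([[0, 1], [1, 2], [5, 6], [6, 7]])
loss = pivot_ranking_loss(x, edges)
```

In this sketch the within-cluster edges score near 1 while cross-cluster pivot pairs score near -1, so the ranking loss is already satisfied; in GREET the discriminator is learned jointly with the dual-channel contrastive encoder rather than fixed in this way.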
Related papers
- When Heterophily Meets Heterogeneous Graphs: Latent Graphs Guided Unsupervised Representation Learning [6.2167203720326025]
Unsupervised heterogeneous graph representation learning (UHGRL) has gained increasing attention due to its significance in handling practical graphs without labels.
We define semantic heterophily and propose an innovative framework called Latent Graphs Guided Unsupervised Representation Learning (LatGRL) to handle this problem.
arXiv Detail & Related papers (2024-09-01T10:25:06Z) - Bootstrap Latents of Nodes and Neighbors for Graph Self-Supervised Learning [27.278097015083343]
Contrastive learning requires negative samples to prevent model collapse and learn discriminative representations.
We introduce a cross-attention module to predict the supportiveness score of a neighbor with respect to the anchor node.
Our method mitigates class collision from negative and noisy positive samples, concurrently enhancing intra-class compactness.
arXiv Detail & Related papers (2024-08-09T14:17:52Z) - Heterophilous Distribution Propagation for Graph Neural Networks [23.897535976924722]
We propose heterophilous distribution propagation (HDP) for graph neural networks.
Instead of aggregating information from all neighborhoods, HDP adaptively separates the neighbors into homophilous and heterophilous parts.
We conduct extensive experiments on 9 benchmark datasets with different levels of homophily.
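The neighbor-separation step described above can be sketched as follows. A plain cosine-similarity threshold is used here as a stand-in for HDP's learned separation, so the function name, threshold, and toy data are illustrative assumptions, not the method itself.

```python
import numpy as np

def split_neighbors(x, adj_list, node, tau=0.0):
    """Illustrative neighbor split (not the authors' HDP mechanism):
    partition a node's neighbors by feature cosine similarity against a
    threshold tau -- similar neighbors are treated as homophilous, the
    rest as heterophilous, and each group is aggregated separately."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    homo = [v for v in adj_list[node] if cos(x[node], x[v]) >= tau]
    hetero = [v for v in adj_list[node] if cos(x[node], x[v]) < tau]
    agg = lambda idx: x[idx].mean(axis=0) if idx else np.zeros_like(x[node])
    return agg(homo), agg(hetero)

# Node 0 has two similar neighbors (1, 3) and one dissimilar neighbor (2).
x = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [0.8, -0.1]])
adj = {0: [1, 2, 3]}
h_homo, h_het = split_neighbors(x, adj, 0)
```

Aggregating the two groups separately lets a downstream encoder treat homophilous and heterophilous signals differently instead of smoothing over all neighbors at once.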
arXiv Detail & Related papers (2024-05-31T06:40:56Z) - Heterophily-Aware Graph Attention Network [42.640057865981156]
Graph Neural Networks (GNNs) have shown remarkable success in graph representation learning.
Existing heterophilic GNNs tend to ignore modeling the heterophily of each edge, which is also vital to tackling the heterophily problem.
We propose a novel Heterophily-Aware Graph Attention Network (HA-GAT) by fully exploring and utilizing the local distribution as the underlying heterophily.
arXiv Detail & Related papers (2023-02-07T03:21:55Z) - Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE)
We first introduce an edge splitting technique so that each edge is used solely as either topology or supervision.
In order to emphasize the differences between pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight the ones that reflect these differences.
arXiv Detail & Related papers (2022-12-25T23:19:56Z) - Single-Pass Contrastive Learning Can Work for Both Homophilic and
Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL)
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z) - Geometry Contrastive Learning on Heterogeneous Graphs [50.58523799455101]
This paper proposes a novel self-supervised learning method, termed as Geometry Contrastive Learning (GCL)
GCL views a heterogeneous graph from the Euclidean and hyperbolic perspectives simultaneously, aiming to combine the strengths of modeling rich semantics and complex structures.
Extensive experiments on four benchmark datasets show that the proposed approach outperforms the strong baselines.
arXiv Detail & Related papers (2022-06-25T03:54:53Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised
Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Geometric Graph Representation Learning via Maximizing Rate Reduction [73.6044873825311]
Learning node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
We propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner.
G2R maps nodes in distinct groups into different subspaces, while each subspace is compact and different subspaces are dispersed.
arXiv Detail & Related papers (2022-02-13T07:46:24Z) - AttrE2vec: Unsupervised Attributed Edge Representation Learning [22.774159996012276]
This paper proposes a novel unsupervised inductive method called AttrE2Vec, which learns a low-dimensional vector representation for edges in attributed networks.
Experimental results show that, compared to contemporary approaches, our method builds more powerful edge vector representations.
arXiv Detail & Related papers (2020-12-29T12:20:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.