Node Representation Learning in Graph via Node-to-Neighbourhood Mutual
Information Maximization
- URL: http://arxiv.org/abs/2203.12265v1
- Date: Wed, 23 Mar 2022 08:21:10 GMT
- Title: Node Representation Learning in Graph via Node-to-Neighbourhood Mutual
Information Maximization
- Authors: Wei Dong, Junsheng Wu, Yi Luo, Zongyuan Ge, Peng Wang
- Abstract summary: The key to learning informative node representations in graphs lies in how to gain contextual information from the neighbourhood.
We present a self-supervised node representation learning strategy via directly maximizing the mutual information between the hidden representations of nodes and their neighbourhood.
Our framework is optimized via a surrogate contrastive loss, where the positive selection underpins the quality and efficiency of representation learning.
- Score: 27.701736055800314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The key to learning informative node representations in graphs lies in how to gain contextual information from the neighbourhood. In this work, we
present a simple-yet-effective self-supervised node representation learning
strategy via directly maximizing the mutual information between the hidden
representations of nodes and their neighbourhood, which can be theoretically
justified by its link to graph smoothing. Following InfoNCE, our framework is
optimized via a surrogate contrastive loss, where the positive selection
underpins the quality and efficiency of representation learning. To this end,
we propose a topology-aware positive sampling strategy, which samples positives
from the neighbourhood by considering the structural dependencies between nodes
and thus enables positive selection upfront. In the extreme case when only one
positive is sampled, we fully avoid expensive neighbourhood aggregation. Our
methods achieve promising performance on various node classification datasets.
It is also worth mentioning that, by applying our loss function to MLP-based node encoders, our methods can be orders of magnitude faster than existing solutions. Our
codes and supplementary materials are available at
https://github.com/dongwei156/n2n.
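
As a rough illustration of the objective described above, the surrogate contrastive loss can be sketched in a few lines of PyTorch. This is a minimal sketch based only on the abstract: the temperature, the in-batch negatives, and the one-positive-per-node setup are assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def n2n_infonce_loss(z_nodes, z_pos, temperature=0.5):
    """InfoNCE-style surrogate loss between node embeddings and one
    sampled positive per node (e.g. a neighbour chosen by a
    topology-aware sampler). The other nodes' positives serve as
    negatives. Illustrative sketch only, not the paper's code.

    z_nodes: (N, d) hidden representations of the N nodes
    z_pos:   (N, d) representation of one sampled positive per node
    """
    z_nodes = F.normalize(z_nodes, dim=1)
    z_pos = F.normalize(z_pos, dim=1)
    logits = z_nodes @ z_pos.t() / temperature  # (N, N) similarity matrix
    labels = torch.arange(z_nodes.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Hypothetical usage: `mlp` is any node encoder (the paper notes an MLP
# suffices), `x` the node features, and `pos_idx` indices produced by a
# topology-aware positive sampler (not implemented here):
#   z = mlp(x)
#   loss = n2n_infonce_loss(z, z[pos_idx])
```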
Related papers
- Contrastive Graph Representation Learning with Adversarial Cross-view Reconstruction and Information Bottleneck [5.707725771108279]
We propose an effective Contrastive Graph Representation Learning with Adversarial Cross-view Reconstruction and Information Bottleneck (CGRL) for node classification.
Our method significantly outperforms existing state-of-the-art algorithms.
arXiv Detail & Related papers (2024-08-01T05:45:21Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
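For reference, the Gumbel-Softmax operator this summary invokes is the standard reparameterized categorical sampler sketched below; the sketch shows only the plain (non-kernelized) form, since the blurb gives no detail on NodeFormer's kernelized variant.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0):
    """Differentiable (soft) sample from a categorical distribution.
    NodeFormer builds on a kernelized variant of this operator to make
    all-pair edge sampling tractable; this toy version shows only the
    plain Gumbel-Softmax it is derived from."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / tau, dim=-1)
```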
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Self-Supervised Node Representation Learning via Node-to-Neighbourhood Alignment [10.879056662671802]
Self-supervised node representation learning aims to learn node representations from unlabelled graphs that rival their supervised counterparts.
In this work, we present a simple-yet-effective self-supervised node representation learning method that aligns the hidden representations of nodes and their neighbourhoods.
We learn node representations that achieve promising node classification performance on a set of graph-structured datasets from small- to large-scale.
arXiv Detail & Related papers (2023-02-09T13:21:18Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Graph Convolutional Neural Networks with Diverse Negative Samples via Decomposed Determinant Point Processes [21.792376993468064]
Graph convolutional networks (GCNs) have achieved great success in graph representation learning.
In this paper, we use quality-diversity decomposition in determinant point processes to obtain diverse negative samples.
We propose a new shortest-path-based method to improve computational efficiency.
arXiv Detail & Related papers (2022-12-05T06:31:31Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
The majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
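A toy sketch of the mechanism this summary describes (attention-scored sorting followed by a 1D convolution over the ordered neighbours) might look as follows; the single-head scoring and all shapes are illustrative assumptions, not the Node2Seq implementation.

```python
import torch
import torch.nn as nn

class AttentionSortConv(nn.Module):
    """Toy Node2Seq-style aggregator: score neighbours with attention,
    sort them by score so the 1D convolution sees an ordered sequence,
    then convolve so each position gets an explicit learnable weight."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)

    def forward(self, target, neighbours):
        # target: (d,), neighbours: (k, d) for one node's k neighbours
        pair = torch.cat([target.expand_as(neighbours), neighbours], dim=-1)
        scores = self.score(pair).squeeze(-1)          # (k,) attention scores
        order = scores.argsort(descending=True)        # sort neighbours by score
        seq = neighbours[order].t().unsqueeze(0)       # (1, d, k) ordered sequence
        return self.conv(seq).mean(dim=-1).squeeze(0)  # aggregated (d,) embedding
```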
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- When Contrastive Learning Meets Active Learning: A Novel Graph Active Learning Paradigm with Self-Supervision [19.938379604834743]
This paper studies active learning (AL) on graphs, whose purpose is to discover the most informative nodes to maximize the performance of graph neural networks (GNNs).
Motivated by the success of contrastive learning (CL), we propose a novel paradigm that seamlessly integrates graph AL with CL.
Comprehensive, confounding-free experiments on five public datasets demonstrate the superiority of our method over state-of-the-art baselines.
arXiv Detail & Related papers (2020-10-30T06:20:07Z)
- Graph Neighborhood Attentive Pooling [0.5493410630077189]
Network representation learning (NRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and sparse graphs.
We propose a novel context-sensitive algorithm called GAP that learns to attend on different parts of a node's neighborhood using attentive pooling networks.
arXiv Detail & Related papers (2020-01-28T15:05:48Z)