Improving Graph Neural Networks on Multi-node Tasks with Labeling Tricks
- URL: http://arxiv.org/abs/2304.10074v1
- Date: Thu, 20 Apr 2023 04:03:40 GMT
- Title: Improving Graph Neural Networks on Multi-node Tasks with Labeling Tricks
- Authors: Xiyuan Wang, Pan Li, Muhan Zhang
- Abstract summary: We propose the \textit{labeling trick}, which first labels nodes in the graph according to their relationships with the target node set before applying a GNN.
Our work explains the superior performance of previous node-labeling-based methods and establishes a theoretical foundation for using GNNs for multi-node representation learning.
- Score: 14.41064333206723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we provide a theory of using graph neural networks (GNNs) for \textit{multi-node representation learning}, where we are interested in learning a representation for a set of more than one node, such as a link. Existing GNNs are mainly designed to learn single-node representations. When we want to learn a node-set representation involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN. In this paper, we show a fundamental limitation of such an approach, namely the inability to capture the dependence among multiple nodes in a node set, and argue that directly aggregating individual node representations fails to produce an effective joint representation for multiple nodes. A straightforward solution is to distinguish target nodes from others. Formalizing this idea, we propose the \textit{labeling trick}, which first labels nodes in the graph according to their relationships with the target node set before applying a GNN, and then aggregates the node representations obtained in the labeled graph for multi-node representations. The labeling trick also unifies a few previous successful works for multi-node representation learning, including SEAL, Distance Encoding, ID-GNN, and NBFNet. Besides node sets in graphs, we also extend labeling tricks to posets, subsets, and hypergraphs. Experiments verify that the labeling trick technique can boost GNNs on various tasks, including undirected link prediction, directed link prediction, hyperedge prediction, and subgraph prediction. Our work explains the superior performance of previous node-labeling-based methods and establishes a theoretical foundation for using GNNs for multi-node representation learning.
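The mechanism the abstract describes is easy to sketch. Below is a minimal, illustrative example of the simplest labeling trick, zero-one labeling: append a binary feature that marks the target node set, run a GNN on the labeled graph, then aggregate the target nodes' embeddings. The `TinyGCN` model, the toy graph, and the Hadamard aggregation are assumptions for illustration only, not the paper's exact architecture or evaluation setup.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """Minimal two-layer GCN over a dense, symmetrically normalized adjacency.
    Illustrative stand-in for any message-passing GNN."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, adj, x):
        h = torch.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)

def normalize(adj):
    """D^{-1/2} (A + I) D^{-1/2} normalization with self-loops."""
    adj = adj + torch.eye(adj.size(0))
    d = adj.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * adj * d.unsqueeze(0)

def zero_one_label(x, targets):
    """Zero-one labeling trick: append a binary channel that is 1 on the
    target node set and 0 elsewhere, distinguishing targets from the rest."""
    label = torch.zeros(x.size(0), 1)
    label[targets] = 1.0
    return torch.cat([x, label], dim=1)

# Toy undirected graph with 5 nodes and random input features.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
adj = torch.zeros(5, 5)
for u, v in edges:
    adj[u, v] = adj[v, u] = 1.0
adj = normalize(adj)
x = torch.randn(5, 8)

gnn = TinyGCN(in_dim=8 + 1, hid_dim=16)  # +1 input channel for the label

def link_representation(u, v):
    """Multi-node (here: link) representation of {u, v}: label the graph,
    apply the GNN to the labeled graph, then aggregate the target embeddings."""
    h = gnn(adj, zero_one_label(x, [u, v]))
    return h[u] * h[v]  # Hadamard aggregation of the two target embeddings

print(link_representation(0, 2).shape)  # torch.Size([16])
```

Note that without the label channel, `h[u]` and `h[v]` would be computed independently of which pair is queried, so no aggregation of them could capture the dependence between `u` and `v`; conditioning the message passing on the target set is precisely the fix the abstract argues for.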
Related papers
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z) - Position-based Hash Embeddings For Scaling Graph Neural Networks [8.87527266373087]
Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be used.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required.
arXiv Detail & Related papers (2021-08-31T22:42:25Z) - Dynamic Labeling for Unlabeled Graph Neural Networks [34.65037955481084]
Graph neural networks (GNNs) rely on node embeddings to represent a node as a vector by its identity, type, or content.
Existing GNNs either assign random labels to nodes or assign one embedding to all nodes, which fails to distinguish one node from another.
In this paper, we analyze the limitation of existing approaches in two types of classification tasks, graph classification and node classification.
arXiv Detail & Related papers (2021-02-23T04:30:35Z) - Pair-view Unsupervised Graph Representation Learning [2.8650714782703366]
Low-dimensional graph embeddings have proved extremely useful in various downstream tasks on large graphs.
This paper proposes PairE, a solution that uses a "pair", a higher-level unit than a "node", as the core unit for graph embeddings.
Experiment results show that PairE consistently outperforms the state-of-the-art baselines in all four downstream tasks.
arXiv Detail & Related papers (2020-12-11T04:09:47Z) - Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node
Representation Learning [26.94699471990803]
We provide a theory of using graph neural networks (GNNs) for multi-node representation learning.
A common practice in previous works is to directly aggregate the single-node representations obtained by a GNN into a joint node set representation.
We unify these node labeling techniques into a single, most general form -- the labeling trick.
arXiv Detail & Related papers (2020-10-30T07:04:11Z) - Multi-grained Semantics-aware Graph Neural Networks [13.720544777078642]
Graph Neural Networks (GNNs) are powerful techniques in representation learning for graphs.
This work proposes a unified model, AdamGNN, to interactively learn node and graph representations.
Experiments on 14 real-world graph datasets show that AdamGNN can significantly outperform 17 competing models on both node- and graph-wise tasks.
arXiv Detail & Related papers (2020-10-01T07:52:06Z) - CatGCN: Graph Convolutional Networks with Categorical Node Features [99.555850712725]
CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
arXiv Detail & Related papers (2020-09-11T09:25:17Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are computationally inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z) - Towards Deeper Graph Neural Networks with Differentiable Group
Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)