Co-embedding of Nodes and Edges with Graph Neural Networks
- URL: http://arxiv.org/abs/2010.13242v1
- Date: Sun, 25 Oct 2020 22:39:31 GMT
- Title: Co-embedding of Nodes and Edges with Graph Neural Networks
- Authors: Xiaodong Jiang, Ronghang Zhu, Pengsheng Ji, Sheng Li
- Abstract summary: Graph embedding is a way to transform and encode data from a high-dimensional, non-Euclidean feature space into a low-dimensional one.
CensNet is a general graph embedding framework, which embeds both nodes and edges to a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
- Score: 13.020745622327894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph, as an important data representation, is ubiquitous in many
real-world applications ranging from social network analysis to biology.
Correctly and effectively learning and extracting information from graphs is
essential for a large number of machine learning tasks. Graph embedding is a
way to transform and encode data from a high-dimensional, non-Euclidean
feature space into a low-dimensional, structured space that is easily
exploited by other machine learning algorithms.
We have witnessed a huge surge of such embedding methods, from statistical
approaches to recent deep learning methods such as the graph convolutional
network (GCN). Deep learning approaches usually outperform traditional methods
on most graph learning benchmarks by building an end-to-end learning framework
that optimizes the loss function directly. However, most existing GCN methods
can only perform convolution operations over node features, ignoring the rich
information in edge features, such as relations in knowledge graphs. To address this
problem, we present CensNet, Convolution with Edge-Node Switching graph neural
network, for learning tasks in graph-structured data with both node and edge
features. CensNet is a general graph embedding framework, which embeds both
nodes and edges to a latent feature space. By using the line graph of the
original undirected graph, the roles of nodes and edges are switched, and two
novel graph convolution operations are proposed for feature propagation. Experimental
results on real-world academic citation networks and quantum chemistry graphs
show that our approach achieves or matches the state-of-the-art performance in
four graph learning tasks, including semi-supervised node classification,
multi-task graph classification, graph regression, and link prediction.
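The edge-node switching idea lends itself to a compact illustration. Below is a minimal PyTorch sketch of one co-embedding layer: edges become nodes of the line graph, a gated adjacency propagates node features, and the roles are then switched to propagate edge features. The gating scheme, layer shapes, and names such as CensNetStyleLayer are illustrative assumptions, not the authors' exact propagation rules.

```python
import torch
import torch.nn as nn

def line_graph_adjacency(edges, num_edges):
    # Two edges are adjacent in the line graph iff they share an endpoint.
    A_lg = torch.zeros(num_edges, num_edges)
    for i in range(num_edges):
        for j in range(i + 1, num_edges):
            if set(edges[i]) & set(edges[j]):
                A_lg[i, j] = A_lg[j, i] = 1.0
    return A_lg

class CensNetStyleLayer(nn.Module):
    """Sketch of edge-node switching: one node update, one edge update."""
    def __init__(self, node_dim, edge_dim, hidden):
        super().__init__()
        self.W_node = nn.Linear(node_dim, hidden)  # node convolution weights
        self.W_edge = nn.Linear(edge_dim, hidden)  # edge convolution weights
        self.p_edge = nn.Linear(edge_dim, 1)       # hypothetical scalar gate per edge
        self.p_node = nn.Linear(node_dim, 1)       # hypothetical scalar gate per node

    def forward(self, X, Z, A, A_lg, T):
        # X: (n, node_dim) node features; Z: (m, edge_dim) edge features
        # A: (n, n) node adjacency; A_lg: (m, m) line-graph adjacency
        # T: (n, m) incidence matrix, T[i, e] = 1 if node i is an endpoint of edge e
        e_gate = torch.sigmoid(self.p_edge(Z)).squeeze(-1)      # (m,)
        A_e = A * (T @ torch.diag(e_gate) @ T.t())              # edge-aware adjacency
        X_new = torch.relu(A_e @ self.W_node(X))                # node convolution

        n_gate = torch.sigmoid(self.p_node(X)).squeeze(-1)      # (n,)
        A_n = A_lg * (T.t() @ torch.diag(n_gate) @ T)           # node-aware line graph
        Z_new = torch.relu(A_n @ self.W_edge(Z))                # edge convolution
        return X_new, Z_new

# Toy usage: a triangle graph with 3 nodes and 3 edges.
edges = [(0, 1), (1, 2), (0, 2)]
n, m = 3, 3
A, T = torch.eye(n), torch.zeros(n, m)
for e, (u, v) in enumerate(edges):
    A[u, v] = A[v, u] = 1.0
    T[u, e] = T[v, e] = 1.0
X, Z = torch.randn(n, 5), torch.randn(m, 4)
X_out, Z_out = CensNetStyleLayer(5, 4, 8)(X, Z, A, line_graph_adjacency(edges, m), T)
```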
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
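A rough sketch of this recipe, assuming the Hugging Face transformers and peft libraries and a BERT backbone (both assumptions; the paper's exact models and pooling may differ):

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
backbone = AutoModel.from_pretrained("bert-base-uncased")
# Parameter-efficient fine-tuning: only small LoRA adapters are trained.
lm = get_peft_model(backbone, LoraConfig(r=8, target_modules=["query", "value"]))
# ... fine-tune `lm` on the downstream node task here (omitted) ...

@torch.no_grad()
def node_embeddings(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state          # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)     # mean-pool over real tokens

emb = node_embeddings(["paper title and abstract ...", "another node's text ..."])
# `emb` then serves as the node feature matrix for a downstream GNN.
```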
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
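For intuition, here is a toy dense version of all-pair message passing with Gumbel-Softmax edge sampling; NodeFormer's kernelized operator avoids materializing the O(n^2) score matrix built here, so this is a conceptual sketch rather than the paper's implementation:

```python
import torch
import torch.nn.functional as F

def all_pair_propagate(X, W_q, W_k, W_v, tau=0.5):
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    logits = Q @ K.t() / Q.shape[-1] ** 0.5         # (n, n) pairwise scores
    # Differentiable sampling of a soft adjacency via Gumbel-Softmax over rows.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-9) + 1e-9)
    A_soft = F.softmax((logits + gumbel) / tau, dim=-1)
    return A_soft @ V                               # message passing over sampled edges

n, d = 6, 8
X = torch.randn(n, d)
W = [torch.randn(d, d) * d ** -0.5 for _ in range(3)]
out = all_pair_propagate(X, *W)
```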
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
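The unrolling idea can be sketched generically: each layer performs one truncated proximal gradient iteration with a learnable step size and threshold. The data-fidelity term and parameterization below are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class UnrolledProxGrad(nn.Module):
    """Each 'layer' is one proximal gradient iteration with learned parameters."""
    def __init__(self, num_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))    # step sizes
        self.thresh = nn.Parameter(torch.full((num_layers,), 0.05))  # prox thresholds

    def forward(self, A_obs):
        A = A_obs.clone()                     # init latent graph at the observation
        for alpha, lam in zip(self.steps, self.thresh):
            # Gradient step on a smooth data-fidelity term ||A - A_obs||^2.
            A = A - alpha * 2.0 * (A - A_obs)
            # Proximal step: soft-thresholding promotes sparse edge weights.
            A = torch.sign(A) * torch.relu(A.abs() - lam)
            A = 0.5 * (A + A.t())             # keep the estimate symmetric
        return A

A_latent = UnrolledProxGrad()(torch.rand(5, 5))
```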
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Edge-Featured Graph Attention Network [7.0629162428807115]
We present edge-featured graph attention networks (EGATs), which extend graph neural networks to learning tasks on graphs with both node and edge features.
By reforming the model structure and the learning process, the new models can accept node and edge features as inputs, incorporate the edge information into feature representations, and iterate both node and edge features in a parallel but mutual way.
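A minimal sketch of one such layer, assuming edge features enter the attention logits and edge embeddings are refreshed from their endpoint nodes (the concatenation scheme and update rules are simplifications, not the paper's exact design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeFeaturedAttention(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden):
        super().__init__()
        self.Wn = nn.Linear(node_dim, hidden)
        self.We = nn.Linear(edge_dim, hidden)
        self.att = nn.Linear(3 * hidden, 1)
        self.edge_update = nn.Linear(3 * hidden, hidden)

    def forward(self, X, Z, edge_index):
        # edge_index: list of (src, dst) pairs; Z[k] is the feature of edge k.
        Hn, He = self.Wn(X), self.We(Z)
        src = torch.tensor([s for s, _ in edge_index])
        dst = torch.tensor([d for _, d in edge_index])
        cat = torch.cat([Hn[src], Hn[dst], He], dim=-1)
        logits = F.leaky_relu(self.att(cat)).squeeze(-1)       # one score per edge
        # Normalize attention over each destination node's incoming edges.
        alpha = torch.zeros_like(logits)
        for node in dst.unique():
            m = dst == node
            alpha[m] = F.softmax(logits[m], dim=0)
        out = torch.zeros_like(Hn)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * Hn[src])  # node update
        Z_new = torch.relu(self.edge_update(cat))              # parallel edge update
        return out, Z_new

# Usage: X (n, node_dim), Z (num_edges, edge_dim), edge_index = [(0, 1), (1, 2), ...]
```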
arXiv Detail & Related papers (2021-01-19T15:08:12Z)
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to their computation and memory costs.
Subg-Con is proposed by utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
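The central-node/subgraph contrast can be sketched as an InfoNCE-style objective; the encoder and subgraph sampler are assumed given, and the pooling and temperature below are illustrative choices:

```python
import torch
import torch.nn.functional as F

def subgraph_contrast_loss(center_emb, subgraph_emb, tau=0.2):
    # center_emb, subgraph_emb: (batch, dim). Row i of each comes from the same
    # central node and a summary (e.g. mean-pooled encoding) of its sampled subgraph.
    c = F.normalize(center_emb, dim=-1)
    s = F.normalize(subgraph_emb, dim=-1)
    logits = c @ s.t() / tau                   # (batch, batch) similarity matrix
    labels = torch.arange(c.shape[0])          # positives sit on the diagonal
    return F.cross_entropy(logits, labels)     # pull own subgraph, push others away

loss = subgraph_contrast_loss(torch.randn(4, 16), torch.randn(4, 16))
```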
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Machine Learning on Graphs: A Model and Comprehensive Taxonomy [22.73365477040205]
We bridge the gap between graph neural networks, network embedding and graph regularization models.
Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which generalizes popular algorithms for semi-supervised learning on graphs.
arXiv Detail & Related papers (2020-05-07T18:00:02Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
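The building block is straightforward to sketch: a fully connected (affine) path on the input is added to the output of an arbitrary graph convolution. The wrapped operator below is a plain normalized-adjacency convolution chosen for illustration:

```python
import torch
import torch.nn as nn

class AffineSkipConv(nn.Module):
    """Graph convolution plus an affine skip path on the raw input features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv = nn.Linear(in_dim, out_dim)  # weights of the graph convolution
        self.skip = nn.Linear(in_dim, out_dim)  # affine (fully connected) skip path

    def forward(self, X, A_norm):
        # X: (n, in_dim) node features; A_norm: normalized adjacency with self-loops.
        return A_norm @ self.conv(X) + self.skip(X)

out = AffineSkipConv(5, 8)(torch.randn(4, 5), torch.eye(4))
```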
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.