Gossip and Attend: Context-Sensitive Graph Representation Learning
- URL: http://arxiv.org/abs/2004.00413v1
- Date: Mon, 30 Mar 2020 18:23:26 GMT
- Title: Gossip and Attend: Context-Sensitive Graph Representation Learning
- Authors: Zekarias T. Kefato, Sarunas Girdzijauskas
- Abstract summary: Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and often sparse graphs.
We propose GOAT, a context-sensitive algorithm inspired by gossip communication, which applies a mutual attention mechanism purely over the structure of the graph.
- Score: 0.5493410630077189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning (GRL) is a powerful technique for learning
low-dimensional vector representation of high-dimensional and often sparse
graphs. Most studies explore the structure and metadata associated with the
graph using random walks and employ unsupervised or semi-supervised learning
schemes. Learning in these methods is context-free, resulting in only a single
representation per node. Recently, studies have questioned the adequacy of a
single representation and proposed context-sensitive approaches, which are
capable of extracting multiple node representations for different contexts.
This proved to be highly effective in applications such as link prediction and
ranking.
However, most of these methods rely either on additional textual features that
require complex and expensive RNNs or CNNs to capture high-level features, or on
a community detection algorithm to identify the multiple contexts of a node.
In this study we show that, in order to extract high-quality context-sensitive
node representations, it is neither necessary to rely on supplementary node
features nor to employ computationally heavy and complex models. We propose
GOAT, a context-sensitive algorithm inspired by gossip communication, which
applies a mutual attention mechanism purely over the structure of the graph. We show the
efficacy of GOAT using 6 real-world datasets on link prediction and node
clustering tasks and compare it against 12 popular and state-of-the-art (SOTA)
baselines. GOAT consistently outperforms them and achieves gains of up to 12%
and 19% over the best-performing methods on link prediction and clustering tasks,
respectively.
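The abstract describes GOAT only at a high level: context-sensitive representations produced by a mutual attention mechanism over graph structure alone, with neighborhood information spread via gossip-style exchanges. As a rough illustration of that idea (not the paper's implementation), the following is a minimal NumPy sketch of mutual attention over two nodes' neighborhood embeddings; the function name, the bilinear parameter W, and the max-pooling choice are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mutual_attention(H_u, H_v, W):
    """Context-sensitive pooling of two nodes' neighborhood embeddings.

    H_u : (n_u, d) embeddings of node u's neighbors (e.g. collected via gossip exchanges)
    H_v : (n_v, d) embeddings of node v's neighbors
    W   : (d, d)   bilinear parameter, assumed to be learned during training
    Returns a representation of u conditioned on v, and of v conditioned on u.
    """
    # Soft alignment between every neighbor of u and every neighbor of v.
    A = np.tanh(H_u @ W @ H_v.T)          # (n_u, n_v)

    # Importance of each of u's neighbors w.r.t. v (row-wise max) and vice versa.
    a_u = softmax(A.max(axis=1))          # (n_u,)
    a_v = softmax(A.max(axis=0))          # (n_v,)

    # Attention-weighted pooling yields context-sensitive representations:
    # u is embedded differently depending on which v it is paired with.
    z_u = a_u @ H_u                       # (d,)
    z_v = a_v @ H_v                       # (d,)
    return z_u, z_v
```

For a link-prediction query (u, v), the pair score could then be, for example, the dot product of z_u and z_v; because the attention weights depend on the partner's neighborhood, u receives a different representation for every context it is scored against.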
Related papers
- Redundancy-Free Self-Supervised Relational Learning for Graph Clustering [13.176413653235311] (arXiv, 2023-09-09)
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R²FGC).
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Our experiments are performed on widely used benchmark datasets to validate the superiority of our R$2$FGC over state-of-the-art baselines.
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308] (arXiv, 2023-08-03)
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
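As a rough, hedged sketch of the two-step pipeline summarized above (fine-tune an LM on the downstream task, then reuse its last hidden states as node features), the snippet below uses Hugging Face transformers with mean pooling; the checkpoint name and pooling choice are placeholders rather than the authors' exact setup, and the PEFT step itself is omitted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; in SimTeG's setting this would be the LM *after*
# parameter-efficient fine-tuning (e.g. LoRA) on the downstream task.
NAME = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(NAME)
lm = AutoModel.from_pretrained(NAME).eval()

@torch.no_grad()
def node_embeddings(texts):
    """Encode each node's raw text and mean-pool the last hidden states."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state               # (n, seq_len, d)
    mask = batch["attention_mask"].unsqueeze(-1)          # (n, seq_len, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (n, d)

# The frozen embeddings X can then be fed to any GNN as node features.
X = node_embeddings(["title and abstract of node 0",
                     "title and abstract of node 1"])
```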
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341] (arXiv, 2022-06-16)
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
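The summary above only names the idea of fusing graph-aware propagation with arbitrary IID models; the sketch below is one hypothetical way such a stacking layer could look (out-of-fold predictions from an off-the-shelf scikit-learn model, smoothed over the adjacency matrix and concatenated back as features). The propagation scheme and the choice of base model are assumptions, not the paper's recipe.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

def stack_with_propagation(X, y_train, A, train_idx, hops=2):
    """X: (n, f) node features; y_train: labels for train_idx; A: (n, n) adjacency."""
    n, n_classes = X.shape[0], len(np.unique(y_train))

    # 1) Any off-the-shelf IID model yields class probabilities
    #    (out-of-fold on the training nodes to avoid leakage).
    base = GradientBoostingClassifier()
    P = np.zeros((n, n_classes))
    P[train_idx] = cross_val_predict(base, X[train_idx], y_train,
                                     cv=5, method="predict_proba")
    base.fit(X[train_idx], y_train)
    test_mask = np.ones(n, dtype=bool)
    test_mask[train_idx] = False
    P[test_mask] = base.predict_proba(X[test_mask])

    # 2) Graph-aware step: row-normalized adjacency smooths the predictions.
    D_inv = 1.0 / np.maximum(A.sum(axis=1, keepdims=True), 1)
    S = P
    for _ in range(hops):
        S = D_inv * (A @ S)

    # 3) Stack: smoothed predictions become extra features for the next learner.
    return np.hstack([X, P, S])
```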
- Inferential SIR-GN: Scalable Graph Representation Learning [0.4699313647907615] (arXiv, 2021-11-08)
Graph representation learning methods generate numerical vector representations for the nodes in a network.
In this work, we propose Inferential SIR-GN, a model which is pre-trained on random graphs, then computes node representations rapidly.
We demonstrate that the model is able to capture a node's structural role information, and show excellent performance on node and graph classification tasks on unseen networks.
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663] (arXiv, 2021-05-22)
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from an implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496] (arXiv, 2021-04-16)
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It outperforms state-of-the-art methods in node classification, node clustering, and visualization.
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634] (arXiv, 2021-01-16)
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose Semantic Graph Convolutional Networks (SGCN), which explore the implicit semantics by learning latent semantic paths in graphs.
- Learning on Attribute-Missing Graphs [66.76561524848304] (arXiv, 2020-11-03)
In many graphs, attributes may be available for only some of the nodes and entirely missing for the others.
Existing graph learning methods, including popular GNNs, cannot provide satisfactory learning performance in this setting.
We develop a novel distribution matching based GNN called structure-attribute transformer (SAT) for attribute-missing graphs.
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605] (arXiv, 2020-09-22)
Existing graph neural networks that are fed the complete graph do not scale, owing to limited computation and memory resources.
Subg-Con is proposed to exploit the strong correlation between central nodes and their sampled subgraphs in order to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has clear advantages in its weaker supervision requirements, model learning scalability, and parallelization.
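The Subg-Con summary above rests on contrasting a central node with its own sampled subgraph; below is a hypothetical PyTorch sketch of that contrastive step. The encoder, the way subgraphs are pooled, and the margin loss are placeholder choices, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def subgraph_contrast_loss(z_center, z_subgraph, margin=0.5):
    """z_center:   (B, d) embeddings of central nodes
    z_subgraph: (B, d) pooled embeddings of each node's sampled subgraph
    Positive pair: a node and its own subgraph; negatives come from the batch."""
    z_c = F.normalize(z_center, dim=-1)
    z_s = F.normalize(z_subgraph, dim=-1)
    pos = (z_c * z_s).sum(-1)                          # similarity to own subgraph
    neg = (z_c * z_s.roll(shifts=1, dims=0)).sum(-1)   # similarity to another node's subgraph
    # Margin-based contrast: the own subgraph should score higher than a shuffled one.
    return F.relu(margin - pos + neg).mean()
```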
- Which way? Direction-Aware Attributed Graph Embedding [2.429993132301275] (arXiv, 2020-01-30)
Graph embedding algorithms are used to efficiently represent a graph in a continuous vector space.
One aspect that is often overlooked is whether the graph is directed or not.
This study presents a novel text-enriched, direction-aware algorithm called DIAGRAM.
- Graph Neighborhood Attentive Pooling [0.5493410630077189] (arXiv, 2020-01-28)
Network representation learning (NRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and sparse graphs.
We propose a novel context-sensitive algorithm called GAP that learns to attend to different parts of a node's neighborhood using attentive pooling networks.