Graph-wise Common Latent Factor Extraction for Unsupervised Graph
Representation Learning
- URL: http://arxiv.org/abs/2112.08830v1
- Date: Thu, 16 Dec 2021 12:22:49 GMT
- Title: Graph-wise Common Latent Factor Extraction for Unsupervised Graph
Representation Learning
- Authors: Thilini Cooray and Ngai-Man Cheung
- Abstract summary: We propose a new principle for unsupervised graph representation learning: Graph-wise Common latent Factor EXtraction (GCFX)
GCFX explicitly extracts common latent factors from an input graph and achieves improved results on downstream tasks compared to the current state-of-the-art.
Through extensive experiments and analysis, we demonstrate that GCFX benefits graph-level tasks by alleviating distractions caused by local variations of individual nodes or local neighbourhoods.
- Score: 40.70562886682939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised graph-level representation learning plays a crucial role in a
variety of tasks such as molecular property prediction and community analysis,
especially when data annotation is expensive. Currently, most of the
best-performing graph embedding methods are based on the Infomax principle. The
performance of these methods depends heavily on the selection of negative
samples and degrades if the samples are not carefully selected.
Inter-graph similarity-based methods also suffer if the selected set of graphs
for similarity matching is low in quality. To address this, we focus only on
utilizing the current input graph for embedding learning. We are motivated by
an observation from real-world graph generation processes where the graphs are
formed based on one or more global factors which are common to all elements of
the graph (e.g., topic of a discussion thread, solubility level of a molecule).
We hypothesize that extracting these common factors could be highly beneficial.
Hence, this work proposes a new principle for unsupervised graph representation
learning: Graph-wise Common latent Factor EXtraction (GCFX). We further propose
a deep model for GCFX, deepGCFX, based on the idea of reversing the
above-mentioned graph generation process, which explicitly extracts common
latent factors from an input graph and achieves improved results on downstream
tasks compared to the current state-of-the-art. Through extensive experiments and
analysis, we demonstrate that, while extracting common latent factors is
beneficial for graph-level tasks by alleviating distractions caused by local
variations of individual nodes or local neighbourhoods, it also benefits
node-level tasks by enabling long-range node dependencies, especially for
disassortative graphs.
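The abstract describes deepGCFX only at the level of its guiding idea: a graph is assumed to be generated from one or more factors shared by all of its elements, and reversing that process should let an encoder pull the shared part out. As a rough sketch of that general idea, and not the authors' architecture, the example below pools a graph-wise "common" factor while keeping node-specific factors separate; the class name CommonFactorAutoencoder, the single mean-aggregation step, the layer sizes, and the reconstruction loss are all assumptions made for this illustration.

```python
# Minimal sketch (not the authors' code): separate a graph-wise "common"
# latent factor from node-specific latent factors, using plain PyTorch and
# a dense adjacency matrix. All architectural choices are illustrative.
import torch
import torch.nn as nn

class CommonFactorAutoencoder(nn.Module):
    def __init__(self, in_dim, hid_dim, common_dim, node_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)                # shared node encoder
        self.to_common = nn.Linear(hid_dim, common_dim)      # graph-wise factor head
        self.to_node = nn.Linear(hid_dim, node_dim)          # node-specific factor head
        self.dec = nn.Linear(common_dim + node_dim, in_dim)  # feature decoder

    def forward(self, x, adj):
        # one round of mean aggregation over neighbours (a stand-in for a GNN)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.enc((adj @ x) / deg + x))
        z_node = self.to_node(h)                    # varies per node
        z_common = self.to_common(h).mean(dim=0)    # pooled: shared by the whole graph
        z = torch.cat([z_common.expand(x.size(0), -1), z_node], dim=-1)
        return self.dec(z), z_common, z_node

# toy usage: reconstruct node features so that z_common captures what is
# shared across the graph and z_node absorbs local variation
x = torch.randn(5, 8)                 # 5 nodes, 8 input features
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()   # symmetrise
model = CommonFactorAutoencoder(8, 16, 4, 4)
recon, z_common, z_node = model(x, adj)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
```

In this toy setup, the pooled z_common plays the role of the graph-level representation, while z_node absorbs the local variation of individual nodes, mirroring the separation the abstract argues for.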
Related papers
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes from an implicit distribution to serve as enhanced negative samples.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Hierarchical Adaptive Pooling by Capturing High-order Dependency for Graph Representation Learning [18.423192209359158]
Graph neural networks (GNNs) have proven mature enough for handling graph-structured data in node-level representation learning tasks.
This paper proposes a hierarchical graph-level representation learning framework, which is adaptively sensitive to graph structures.
arXiv Detail & Related papers (2021-04-13T06:22:24Z)
- Accurate Learning of Graph Representations with Graph Multiset Pooling [45.72542969364438]
We propose a Graph Multiset Transformer (GMT) that captures the interaction between nodes according to their structural dependencies.
Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks.
arXiv Detail & Related papers (2021-02-23T07:45:58Z)
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to their computation and memory costs.
Subg-Con is proposed to capture regional structure information by exploiting the strong correlation between central nodes and their sampled subgraphs.
Compared with existing graph representation learning approaches, Subg-Con has prominent advantages in weaker supervision requirements, model learning scalability, and parallelization; a rough sketch of this subgraph-contrast idea is given after this list.
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
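As flagged in the Subg-Con entry above, a rough sketch of the subgraph-contrast idea follows. It is not the Subg-Con implementation: the class name SubgraphContrast, the single-layer encoder, the 1-hop subgraph summary, and the logistic contrastive loss are assumptions made for this illustration.

```python
# Minimal sketch (not the Subg-Con code): contrast each node's embedding with
# a summary of its own sampled subgraph versus a mismatched subgraph summary,
# using plain PyTorch and a dense adjacency matrix.
import torch
import torch.nn as nn

class SubgraphContrast(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)

    def node_and_subgraph(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.enc((adj @ x) / deg + x))   # node embeddings
        # summary of each node's 1-hop subgraph: mean over itself and neighbours
        mask = adj + torch.eye(adj.size(0))
        s = (mask @ h) / mask.sum(dim=1, keepdim=True)
        return h, s

    def forward(self, x, adj):
        h, s = self.node_and_subgraph(x, adj)
        pos = (h * s).sum(dim=-1)                              # node vs. own subgraph
        neg = (h * s[torch.randperm(s.size(0))]).sum(dim=-1)   # node vs. mismatched subgraph
        # simple logistic contrastive loss: pull positives up, push negatives down
        return -(torch.log(torch.sigmoid(pos) + 1e-8)
                 + torch.log(1 - torch.sigmoid(neg) + 1e-8)).mean()

# toy usage on a random graph with 6 nodes and 8 input features
x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()   # symmetrise
loss = SubgraphContrast(8, 16)(x, adj)
loss.backward()
```

Here each node is pulled toward a summary of its own 1-hop subgraph and pushed away from a randomly mismatched one; Subg-Con's actual subgraph sampling and loss may differ.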