Graph-wise Common Latent Factor Extraction for Unsupervised Graph
Representation Learning
- URL: http://arxiv.org/abs/2112.08830v1
- Date: Thu, 16 Dec 2021 12:22:49 GMT
- Title: Graph-wise Common Latent Factor Extraction for Unsupervised Graph
Representation Learning
- Authors: Thilini Cooray and Ngai-Man Cheung
- Abstract summary: We propose a new principle for unsupervised graph representation learning: Graph-wise Common latent Factor EXtraction (GCFX)
GCFX explicitly extracts common latent factors from an input graph and achieves improved results on downstream tasks compared to the current state-of-the-art.
Through extensive experiments and analysis, we demonstrate that GCFX benefits graph-level tasks by alleviating distractions caused by local variations of individual nodes or local neighbourhoods.
- Score: 40.70562886682939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised graph-level representation learning plays a crucial role in a
variety of tasks such as molecular property prediction and community analysis,
especially when data annotation is expensive. Currently, most of the
best-performing graph embedding methods are based on the Infomax principle. The
performance of these methods depends heavily on the selection of negative
samples, and it degrades if the samples are not carefully selected. Inter-graph
similarity-based methods also suffer if the selected set of graphs for
similarity matching is of low quality. To address this, we focus only on
utilizing the current input graph for embedding learning. We are motivated by
an observation from real-world graph generation processes where the graphs are
formed based on one or more global factors which are common to all elements of
the graph (e.g., topic of a discussion thread, solubility level of a molecule).
We hypothesize that extracting these common factors could be highly beneficial.
Hence, this work proposes a new principle for unsupervised graph representation
learning: Graph-wise Common latent Factor EXtraction (GCFX). We further propose
a deep model for GCFX, deepGCFX, based on the idea of reversing the
above-mentioned graph generation process, which explicitly extracts common
latent factors from an input graph and achieves improved results on downstream
tasks compared to the current state-of-the-art. Through extensive experiments and
analysis, we demonstrate that, while extracting common latent factors benefits
graph-level tasks by alleviating distractions caused by local variations of
individual nodes or local neighbourhoods, it also benefits
node-level tasks by enabling long-range node dependencies, especially for
disassortative graphs.
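As a rough illustration of the GCFX principle (a hedged sketch, not the authors' deepGCFX implementation), the snippet below mirrors the reversed generation idea: encode nodes, pool over all of them to obtain one graph-wise common factor, keep per-node factors for local variation, and reconstruct node features from both. All names (CommonFactorEncoder, z_common, z_node) are assumptions for illustration.

```python
# Minimal sketch of graph-wise common latent factor extraction.
# Assumes a dense adjacency matrix `adj` (N, N) and node features `x` (N, F).
import torch
import torch.nn as nn

class CommonFactorEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, common_dim, node_dim):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.to_common = nn.Linear(hid_dim, common_dim)  # graph-wise head
        self.to_node = nn.Linear(hid_dim, node_dim)      # node-specific head

    def forward(self, x, adj):
        h = self.node_mlp(adj @ x)  # one propagation step over neighbourhoods
        # the common factor is pooled over all nodes, so it is shared graph-wide
        z_common = self.to_common(h).mean(dim=0, keepdim=True)
        z_node = self.to_node(h)    # captures local variations
        return z_common, z_node

class Decoder(nn.Module):
    def __init__(self, common_dim, node_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(common_dim + node_dim, out_dim)

    def forward(self, z_common, z_node):
        # every node is reconstructed from the shared factor plus its own factor
        z = torch.cat([z_common.expand(z_node.size(0), -1), z_node], dim=-1)
        return self.lin(z)
```

Trained with a reconstruction objective, such a split encourages z_common to carry the global factors (e.g., a molecule's solubility level) while z_node absorbs node-level noise.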
Related papers
- What makes a good feedforward computational graph? [0.8370225749625163]
We study desirable properties of a feedforward computational graph, discovering two important complementary measures: fidelity and mixing time.
Our study is backed both by theoretical analyses of the metrics' behaviour on various graphs and by correlating these metrics with the performance of trained neural network models.
arXiv Detail & Related papers (2025-02-10T18:26:40Z)
- Revisiting the Necessity of Graph Learning and Common Graph Benchmarks [2.1125997983972207]
Graph machine learning has enjoyed a meteoric rise in popularity since the introduction of deep learning in graph contexts.
The driving belief is that node features are insufficient for these benchmark tasks, so measured performance accurately reflects improvements in graph learning.
We show that, surprisingly, node features are often more than sufficient for these tasks (a minimal feature-only baseline of this kind is sketched below).
arXiv Detail & Related papers (2024-12-09T03:09:04Z)
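To make the claim concrete, a minimal feature-only baseline of the kind such studies compare against ignores all edges and classifies nodes from raw features alone. The Planetoid/Cora loading below is an assumed setup for illustration; any (features, labels, split) triple would do.

```python
# Hypothetical feature-only baseline: if this matches a GNN's accuracy,
# the graph structure contributed little for the task.
from sklearn.linear_model import LogisticRegression
from torch_geometric.datasets import Planetoid  # assumed data source

data = Planetoid(root="/tmp/Cora", name="Cora")[0]
X, y = data.x.numpy(), data.y.numpy()
train, test = data.train_mask.numpy(), data.test_mask.numpy()

clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("feature-only test accuracy:", clf.score(X[test], y[test]))
```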
- Fine-grained Graph Rationalization [51.293401030058085]
We propose fine-grained graph rationalization (FIG) for graph machine learning.
Our idea is driven by the self-attention mechanism, which provides rich interactions between input nodes.
Our experiments involve 7 real-world datasets, and the proposed FIG shows significant performance advantages compared to 13 baseline methods.
arXiv Detail & Related papers (2023-12-13T02:56:26Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so they are distributed evenly in the graph (a toy version of the similarity-based positive selection is sketched below).
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
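A toy version of similarity-based positive selection, assuming each training graph has already been reduced to a fixed-length fingerprint vector; the paper's actual domain-specific measurements may differ.

```python
# Hypothetical helper: pick the k most similar training graphs (by cosine
# similarity of fingerprints) as positives for the anchor graph.
import numpy as np

def top_k_positives(fingerprints: np.ndarray, anchor_idx: int, k: int = 3):
    a = fingerprints[anchor_idx]
    sims = fingerprints @ a / (
        np.linalg.norm(fingerprints, axis=1) * np.linalg.norm(a) + 1e-9)
    sims[anchor_idx] = -np.inf  # a graph is never its own positive
    return np.argsort(sims)[-k:]
```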
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes from an implicit distribution and uses them as enhanced negative samples (a minimal generator of this kind is sketched below).
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
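A minimal sketch of the fake-neighbor idea, assuming a simple MLP generator that maps noise to embedding space; AGE's actual generator, discriminator, and training loop are not shown.

```python
# Hypothetical generator: samples from an implicit distribution become
# "fake neighbor" embeddings used as enhanced negatives.
import torch
import torch.nn as nn

class FakeNeighborGenerator(nn.Module):
    def __init__(self, noise_dim: int, emb_dim: int):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(noise_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim))

    def forward(self, n_samples: int) -> torch.Tensor:
        z = torch.randn(n_samples, self.noise_dim)  # implicit noise source
        return self.net(z)  # (n_samples, emb_dim) negative embeddings
```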
- Accurate Learning of Graph Representations with Graph Multiset Pooling [45.72542969364438]
We propose a Graph Multiset Transformer (GMT) that captures the interaction between nodes according to their structural dependencies (a simplified attention-pooling sketch follows below).
Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks.
arXiv Detail & Related papers (2021-02-23T07:45:58Z)
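A much-simplified, single-seed illustration of attention-based pooling in this spirit (GMT itself uses multiset attention blocks with multiple seeds and heads; names here are assumptions):

```python
# Hypothetical single-seed attention pooling: a learnable query attends
# over node embeddings to produce one graph-level representation.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(1, dim))  # learnable query
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (num_nodes, dim)
        scores = self.seed @ self.key(h).t() / h.size(1) ** 0.5
        attn = torch.softmax(scores, dim=-1)  # weights over nodes
        return attn @ self.value(h)           # (1, dim) graph embedding
```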
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks that are fed the complete graph data are not scalable, owing to their computation and memory costs.
Subg-Con is proposed to capture regional structure information by exploiting the strong correlation between central nodes and their sampled subgraphs.
Compared with existing graph representation learning approaches, Subg-Con has prominent advantages in weaker supervision requirements, model learning scalability, and parallelization (a toy contrastive loss for this setup is sketched below).
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
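A toy InfoNCE-style loss for this setup: a central node's embedding is pulled toward the pooled embedding of its own sampled subgraph, with the other centres' subgraphs in the batch acting as negatives. Function and argument names are assumptions, not the authors' code.

```python
# Hypothetical subgraph-contrast loss: row i of each tensor is a
# (central node, its subgraph) positive pair; off-diagonals are negatives.
import torch
import torch.nn.functional as F

def subgraph_contrast_loss(center_emb: torch.Tensor,
                           subgraph_emb: torch.Tensor) -> torch.Tensor:
    logits = center_emb @ subgraph_emb.t()     # all-pairs similarity
    labels = torch.arange(center_emb.size(0))  # diagonal entries are positives
    return F.cross_entropy(logits, labels)
```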