Heterogeneous Graph Sparsification for Efficient Representation Learning
- URL: http://arxiv.org/abs/2211.07518v1
- Date: Mon, 14 Nov 2022 16:47:06 GMT
- Title: Heterogeneous Graph Sparsification for Efficient Representation Learning
- Authors: Chandan Chunduru, Chun Jiang Zhu, Blake Gains, and Jinbo Bi
- Abstract summary: We develop sampling-based algorithms for constructing sparsifiers that are provably sparse and preserve important information in the original graphs.
We have performed extensive experiments to confirm that the proposed method can improve time and space complexities of representation learning while achieving comparable, or even better performance in subsequent graph learning tasks.
- Score: 15.075580975708654
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph sparsification is a powerful tool to approximate an arbitrary graph and
has been used in machine learning over homogeneous graphs. In heterogeneous
graphs such as knowledge graphs, however, sparsification has not been
systematically exploited to improve efficiency of learning tasks. In this work,
we initiate the study on heterogeneous graph sparsification and develop
sampling-based algorithms for constructing sparsifiers that are provably sparse
and preserve important information in the original graphs. We have performed
extensive experiments to confirm that the proposed method can improve time and
space complexities of representation learning while achieving comparable, or
even better performance in subsequent graph learning tasks based on the learned
embedding.
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z) - The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure [18.00833762891405]
Graph Lottery Ticket (GLT) Hypothesis: There is an extremely sparse backbone for every graph.
We study 8 key metrics of interest that directly influence the performance of graph learning algorithms.
We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs.
arXiv Detail & Related papers (2023-12-08T00:24:44Z) - Efficiently Learning the Graph for Semi-supervised Learning [4.518012967046983]
We show how to learn the best graphs from the sparse families efficiently using the conjugate gradient method.
Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions.
We implement our approach and demonstrate significant ($\sim$10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
arXiv Detail & Related papers (2023-06-12T13:22:06Z) - Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z) - Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs is far less intuitive, and it is much harder to provide high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
arXiv Detail & Related papers (2022-02-14T05:54:48Z) - Bootstrapping Informative Graph Augmentation via A Meta Learning Approach [21.814940639910358]
In graph contrastive learning, benchmark methods apply various graph augmentation approaches.
Most of the augmentation methods are non-learnable, which can produce unhelpful augmented graphs.
We propose generating augmented graphs with a learnable graph augmenter, called MEta Graph Augmentation (MEGA).
arXiv Detail & Related papers (2022-01-11T07:15:13Z) - Learning Product Graphs Underlying Smooth Graph Signals [15.023662220197242]
This paper devises a method to learn structured graphs from data that are given in the form of product graphs.
To this end, the graph learning problem is first posed as a linear program, which (on average) outperforms state-of-the-art graph learning algorithms.
arXiv Detail & Related papers (2020-02-26T03:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.