The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph
Structure
- URL: http://arxiv.org/abs/2312.04762v1
- Date: Fri, 8 Dec 2023 00:24:44 GMT
- Title: The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph
Structure
- Authors: Anton Tsitsulin and Bryan Perozzi
- Abstract summary: Graph Lottery Ticket (GLT) Hypothesis: There is an extremely sparse backbone for every graph.
We study 8 key metrics of interest that directly influence the performance of graph learning algorithms.
We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs.
- Score: 18.00833762891405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph learning methods help utilize implicit relationships among data items,
thereby reducing training label requirements and improving task performance.
However, determining the optimal graph structure for a particular learning task
remains a challenging research problem.
In this work, we introduce the Graph Lottery Ticket (GLT) Hypothesis - that
there is an extremely sparse backbone for every graph, and that graph learning
algorithms attain comparable performance when trained on that subgraph as on
the full graph. We identify and systematically study 8 key metrics of interest
that directly influence the performance of graph learning algorithms.
Subsequently, we define the notion of a "winning ticket" for graph structure -
an extremely sparse subset of edges that can deliver a robust approximation of
the entire graph's performance. We propose a straightforward and efficient
algorithm for finding these GLTs in arbitrary graphs. Empirically, we observe
that the performance of different graph learning algorithms can be matched or
even exceeded on graphs with an average degree as low as 5.
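The abstract does not spell out the selection criterion, so the following is only a minimal sketch of the pruning setting (the edge-weight score and tree-plus-greedy scheme are assumptions for illustration, not the paper's algorithm): keep a maximum-weight spanning tree for connectivity, then greedily retain the highest-weight remaining edges until a target average degree is reached.

```python
import networkx as nx

def sparse_backbone(G, target_avg_degree=5.0):
    """Prune G to roughly the target average degree.

    Keeps a maximum-weight spanning tree so the backbone stays
    connected, then greedily adds the highest-weight remaining
    edges. Missing edge weights default to 1.0.
    """
    budget = int(target_avg_degree * G.number_of_nodes() / 2)
    keep = {frozenset(e) for e in
            nx.maximum_spanning_tree(G, weight="weight").edges()}
    weight = lambda e: G[e[0]][e[1]].get("weight", 1.0)
    for e in sorted(G.edges(), key=weight, reverse=True):
        if len(keep) >= budget:
            break
        keep.add(frozenset(e))
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    H.add_edges_from(tuple(e) for e in keep)
    return H

# Example: a dense random graph (average degree ~20) pruned to ~5.
G = nx.erdos_renyi_graph(200, 0.1, seed=0)
H = sparse_backbone(G)
```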
Related papers
- Knowledge Probing for Graph Representation Learning [12.960185655357495]
We propose a novel graph probing framework (GraphProbe) to investigate and interpret whether the family of graph learning methods has encoded different levels of knowledge in graph representation learning.
Based on the intrinsic properties of graphs, we design three probes to systematically investigate the graph representation learning process from different perspectives.
We construct a thorough evaluation benchmark with nine representative graph learning methods, spanning random-walk-based approaches, basic graph neural networks, and self-supervised graph methods, and probe them on six benchmark datasets for node classification, link prediction, and graph classification (a generic probe in this spirit is sketched below).
arXiv Detail & Related papers (2024-08-07T16:27:45Z)
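GraphProbe's three probes are not detailed in this summary; the sketch below is a generic structural probe in the same spirit (the setup is an assumption, not the paper's): freeze the node embeddings and train a linear classifier to predict an intrinsic graph property, here high vs. low degree.

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def degree_probe(G, embeddings):
    """Probe frozen node embeddings (rows ordered as G.nodes())
    for degree information: test accuracy well above 0.5 means
    the embedding method encoded this structural property."""
    degrees = np.array([d for _, d in G.degree()])
    y = (degrees > np.median(degrees)).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, y, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)
```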
- Learning Attributed Graphlets: Predictive Graph Mining by Graphlets with Trainable Attribute [4.14034448023832]
This paper proposes an interpretable classification algorithm for attributed graph data, called LAGRA (Learning Attributed GRAphlets).
LAGRA learns importance weights for small attributed subgraphs, called attributed graphlets (AGs), while simultaneously optimizing their attribute vectors.
We empirically demonstrate that LAGRA has superior or comparable prediction performance to the standard existing algorithms.
arXiv Detail & Related papers (2024-02-10T12:10:13Z)
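Schematically (my notation, not necessarily the paper's exact formulation), LAGRA's classifier has the shape of a sparse linear model over attributed-graphlet features, with both the weights and the graphlet attribute vectors trained:

$$ f(G) = \sum_{H \in \mathcal{H}} \beta_H \, \phi(G; H, \mathbf{a}_H), $$

where $\mathcal{H}$ is a set of small subgraph templates, $\mathbf{a}_H$ is a trainable attribute vector for template $H$, $\phi$ scores how well the attributed graphlet matches $G$, and sparsity on $\beta$ singles out the important graphlets for interpretation.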
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
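For reference, the Bures-Wasserstein distance the title refers to is the 2-Wasserstein distance between zero-mean Gaussians, which depends only on the covariances:

$$ \mathcal{B}(\Sigma_1, \Sigma_2)^2 = \operatorname{tr}(\Sigma_1) + \operatorname{tr}(\Sigma_2) - 2 \operatorname{tr}\!\left( \left( \Sigma_1^{1/2} \Sigma_2 \Sigma_1^{1/2} \right)^{1/2} \right). $$

Smooth graph signals are commonly modeled as such Gaussians with covariance the Laplacian pseudoinverse $L^{\dagger}$ (a standard construction, assumed here rather than taken from the abstract), so a barycenter in this metric induces a mean graph.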
- Synthetic Graph Generation to Benchmark Graph Learning [7.914804101579097]
Graph learning algorithms have attained state-of-the-art performance on many graph analysis tasks.
However, assessing this progress is difficult, in part because of the very small number of datasets used in practice to benchmark the performance of graph learning algorithms.
We propose to generate synthetic graphs and study the behaviour of graph learning algorithms in a controlled scenario; one such setup is sketched below.
arXiv Detail & Related papers (2022-04-04T10:48:32Z)
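The paper's generator is not described in this summary; as one hedged example of a controlled scenario, the sketch below builds a family of stochastic block model graphs whose community structure weakens as the inter-block edge probability grows, so a method's sensitivity to structure can be measured directly.

```python
import networkx as nx

def sbm_suite(n_blocks=4, block_size=50, p_in=0.3,
              q_values=(0.01, 0.05, 0.1, 0.2)):
    """Stochastic block model graphs with fixed intra-block density
    p_in and increasing inter-block density q: the higher q gets,
    the weaker the community signal a learner can exploit."""
    sizes = [block_size] * n_blocks
    suite = []
    for q in q_values:
        probs = [[p_in if i == j else q for j in range(n_blocks)]
                 for i in range(n_blocks)]
        suite.append(nx.stochastic_block_model(sizes, probs, seed=0))
    return suite
```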
- Graph Information Bottleneck for Subgraph Recognition [103.37499715761784]
We propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning.
Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph.
We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising.
arXiv Detail & Related papers (2020-10-12T09:32:20Z)
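Following the information bottleneck principle the framework instantiates, the IB-subgraph is the solution of a predictiveness-versus-compression trade-off:

$$ \max_{G_{\mathrm{sub}} \subseteq G} \; I(G_{\mathrm{sub}}; Y) - \beta \, I(G_{\mathrm{sub}}; G), $$

where $Y$ is the task label, $I(\cdot\,;\cdot)$ denotes mutual information, and $\beta > 0$ sets how strongly the subgraph must compress the input graph.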
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
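MGMN's cross-level matching components are not described in this summary; as a minimal stand-in for the task it solves (not the authors' architecture), the sketch below embeds each graph by one round of neighbor averaging plus mean-pooling and scores a pair by cosine similarity.

```python
import numpy as np

def graph_vector(A, X):
    """Mean-neighbor aggregation, then mean-pool to one vector per graph."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    H = (A @ X) / deg
    return np.concatenate([X, H], axis=1).mean(axis=0)

def graph_similarity(A1, X1, A2, X2):
    """Cosine similarity between two graph-level embeddings."""
    z1, z2 = graph_vector(A1, X1), graph_vector(A2, X2)
    return float(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-12))
```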
- Faster Graph Embeddings via Coarsening [25.37181684580123]
Graph embeddings are a ubiquitous tool for machine learning tasks, such as node classification and link prediction, on graph-structured data.
However, computing the embeddings for large-scale graphs is prohibitively inefficient, even when we are interested in only a small subset of relevant vertices.
We present an efficient graph coarsening approach, based on Schur complements, for computing the embedding of the relevant vertices.
arXiv Detail & Related papers (2020-07-06T15:22:25Z)
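The construction has a clean closed form: partition the graph Laplacian over the relevant vertices $V_r$ and the remaining vertices $V_o$, and eliminate the latter via the Schur complement,

$$ L = \begin{pmatrix} L_{rr} & L_{ro} \\ L_{or} & L_{oo} \end{pmatrix}, \qquad \operatorname{Sc}(L, V_o) = L_{rr} - L_{ro} L_{oo}^{-1} L_{or}, $$

which is again a graph Laplacian on $V_r$ and preserves effective resistances between the retained vertices, so embeddings computed on the small coarsened graph can remain faithful.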
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
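A hedged sketch of the iterative idea (heavily simplified: a one-step smoothing stands in for the trained GNN, and IDGL's learned similarity metric is replaced by plain cosine-kNN): alternate between embedding nodes on the current graph and rebuilding the graph from embedding similarities.

```python
import numpy as np

def cosine_knn(Z, k):
    """Adjacency keeping each node's top-k cosine neighbors (k < n)."""
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    S = Zn @ Zn.T
    np.fill_diagonal(S, -np.inf)                 # exclude self-loops
    A = np.zeros_like(S)
    top = np.argpartition(-S, k, axis=1)[:, :k]  # k largest per row
    A[np.arange(S.shape[0])[:, None], top] = 1.0
    return np.maximum(A, A.T)                    # symmetrize

def smooth(A, X):
    """One-step neighbor smoothing as a stand-in for a trained GNN."""
    return (X + A @ X) / (A.sum(axis=1, keepdims=True) + 1.0)

def idgl_style_loop(A, X, k=5, iters=3):
    """Alternate: embed nodes on the current graph, then
    rebuild the graph from embedding similarities."""
    for _ in range(iters):
        Z = smooth(A, X)
        A = cosine_knn(Z, k)
    return A, Z
```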
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
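A minimal sketch of proximity-driven pooling (the scoring here, 1- and 2-hop proximity mass, is an assumption for illustration, not the paper's measure): rank nodes by multi-hop connectivity and induce the pooled graph on the top fraction.

```python
import numpy as np

def proximity_pool(A, X, ratio=0.5):
    """Keep the nodes with the largest 1- and 2-hop proximity
    mass; return the induced adjacency and their features."""
    scores = (A + A @ A).sum(axis=1)
    k = max(1, int(ratio * A.shape[0]))
    keep = np.argsort(-scores)[:k]
    return A[np.ix_(keep, keep)], X[keep]
```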
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE), are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
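For context, the standard GAE these methods extend encodes nodes with a GCN and reconstructs the adjacency with an inner-product decoder (the adaptive variants additionally learn the graph itself rather than fixing $A$, per the title; details beyond that are not in this summary):

$$ Z = \operatorname{GCN}(X, A), \qquad \hat{A} = \sigma\!\left( Z Z^{\top} \right), $$

trained so that $\hat{A}$ reconstructs $A$, with a KL regularizer on $Z$ in the variational version.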
- Learning Product Graphs Underlying Smooth Graph Signals [15.023662220197242]
This paper devises a method to learn structured graphs from data that are given in the form of product graphs.
To this end, the graph learning problem is first posed as a linear program, which (on average) outperforms the state-of-the-art graph learning algorithms.
arXiv Detail & Related papers (2020-02-26T03:25:15Z)
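The product assumption is what the method exploits: for a Cartesian product of two factor graphs (the common case, assumed here), the Laplacian decomposes as a Kronecker sum, so learning the small factors determines the large graph:

$$ L = L_1 \oplus L_2 = L_1 \otimes I_{n_2} + I_{n_1} \otimes L_2, $$

with factor Laplacians $L_1 \in \mathbb{R}^{n_1 \times n_1}$ and $L_2 \in \mathbb{R}^{n_2 \times n_2}$.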