SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks
- URL: http://arxiv.org/abs/2102.05034v1
- Date: Tue, 9 Feb 2021 18:56:01 GMT
- Title: SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks
- Authors: Bahare Fatemi, Layla El Asri, Seyed Mehran Kazemi
- Abstract summary: We propose the Simultaneous Learning of Adjacency and GNN Parameters with Self-supervision, or SLAPS, a method that provides more supervision for inferring a graph structure through self-supervision.
A comprehensive experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several models that have been proposed to learn a task-specific graph structure on established benchmarks.
- Score: 14.319159694115655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) work well when the graph structure is provided.
However, this structure may not always be available in real-world applications.
One solution to this problem is to infer a task-specific latent structure and
then apply a GNN to the inferred graph. Unfortunately, the space of possible
graph structures grows super-exponentially with the number of nodes and so the
task-specific supervision may be insufficient for learning both the structure
and the GNN parameters. In this work, we propose the Simultaneous Learning of
Adjacency and GNN Parameters with Self-supervision, or SLAPS, a method that
provides more supervision for inferring a graph structure through
self-supervision. A comprehensive experimental study demonstrates that SLAPS
scales to large graphs with hundreds of thousands of nodes and outperforms
several models that have been proposed to learn a task-specific graph structure
on established benchmarks.
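The abstract does not spell out the training objective, but the idea lends itself to a compact sketch. Below is a minimal, illustrative PyTorch sketch (not the authors' code) of jointly learning an adjacency matrix and GNN parameters: a generator infers a normalized adjacency from node features, one GCN classifies nodes over it, and a second GCN head reconstructs masked node features as the self-supervised task. The similarity-based generator, layer sizes, and masking rate are assumptions.

```python
# Minimal, illustrative sketch of the SLAPS idea (not the authors' code):
# jointly learn an adjacency matrix and GNN parameters, with a
# feature-denoising loss as the extra self-supervision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdjacencyGenerator(nn.Module):
    """Infers a dense, symmetric, normalized adjacency from node features."""
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x):
        z = self.proj(x)
        a = F.relu(z @ z.t())                 # non-negative similarities
        a = (a + a.t()) / 2                   # symmetrize
        d = a.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        return d[:, None] * a * d[None, :]    # D^{-1/2} A D^{-1/2}

class GCN(nn.Module):
    """Two-layer GCN over whatever adjacency it is given."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.l1 = nn.Linear(in_dim, hid_dim)
        self.l2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        return adj @ self.l2(F.relu(adj @ self.l1(x)))

def slaps_step(x, y, train_mask, gen, clf, dae, lam=1.0, mask_rate=0.2):
    adj = gen(x)
    # Task loss: supervised classification on the labeled nodes.
    task = F.cross_entropy(clf(x, adj)[train_mask], y[train_mask])
    # Self-supervision: reconstruct randomly masked features through the
    # learned graph, so the reconstruction loss also trains the generator.
    masked = torch.rand_like(x) < mask_rate
    recon = dae(x.masked_fill(masked, 0.0), adj)
    return task + lam * F.mse_loss(recon[masked], x[masked])

# Usage with random data: 100 nodes, 16 features, 3 classes.
x, y = torch.randn(100, 16), torch.randint(0, 3, (100,))
train_mask = torch.rand(100) < 0.3
gen, clf, dae = AdjacencyGenerator(16), GCN(16, 32, 3), GCN(16, 32, 16)
slaps_step(x, y, train_mask, gen, clf, dae).backward()
```

The key design point is that the self-supervised reconstruction loss back-propagates into the adjacency generator, supplying the extra supervision for structure inference that the abstract refers to.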
Related papers
- Graph Structure Prompt Learning: A Novel Methodology to Improve Performance of Graph Neural Networks [13.655670509818144]
We propose a novel Graph structure Prompt Learning method (GPL) to enhance the training of Graph Neural Networks (GNNs).
GPL employs task-independent graph structure losses to encourage GNNs to learn intrinsic graph characteristics while simultaneously solving downstream tasks.
In experiments on eleven real-world datasets, GNNs trained with GPL significantly outperform their original performance on node classification, graph classification, and edge prediction tasks.
arXiv Detail & Related papers (2024-07-16T03:59:18Z)
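A hedged sketch of the joint objective the GPL entry above describes: a downstream task loss combined with a task-independent graph structure loss. Using edge reconstruction from node embeddings as the structure loss is an illustrative assumption, not necessarily the loss GPL uses.

```python
# Downstream task loss plus a task-independent structure loss (illustrative).
import torch
import torch.nn.functional as F

def joint_loss(logits, labels, train_mask, h, pos_edges, neg_edges, alpha=0.5):
    # Downstream objective (node classification here).
    task = F.cross_entropy(logits[train_mask], labels[train_mask])
    # Task-independent objective: score observed edges above random
    # non-edges using the same node embeddings h.
    pos = (h[pos_edges[0]] * h[pos_edges[1]]).sum(-1)
    neg = (h[neg_edges[0]] * h[neg_edges[1]]).sum(-1)
    structure = F.binary_cross_entropy_with_logits(
        torch.cat([pos, neg]),
        torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]),
    )
    return task + alpha * structure

# Usage with random tensors (30 nodes, 4 classes, 20 edge samples each).
h, logits = torch.randn(30, 8), torch.randn(30, 4)
labels, train_mask = torch.randint(0, 4, (30,)), torch.rand(30) < 0.5
pos, neg = torch.randint(0, 30, (2, 20)), torch.randint(0, 30, (2, 20))
loss = joint_loss(logits, labels, train_mask, h, pos, neg)
```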
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
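The GCN-SA entry above pairs graph convolution with self-attention to capture long-range dependencies. A minimal sketch of one way to fuse the two, assuming a summed local/global combination (the paper's exact architecture may differ):

```python
# Combine local graph convolution with global self-attention so every node
# can also attend to distant nodes (illustrative fusion).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNSALayer(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.local = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, adj):
        local = adj @ self.local(x)                      # neighborhood aggregation
        glob, _ = self.attn(x[None], x[None], x[None])   # all-pairs attention
        return F.relu(local + glob[0])                   # fuse local + global

# Usage: 50 nodes, 32-dim features, identity adjacency as a stand-in.
x, adj = torch.randn(50, 32), torch.eye(50)
out = GCNSALayer(32)(x, adj)
```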
- GraphEdit: Large Language Models for Graph Structure Learning [62.618818029177355]
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data.
Existing GSL methods heavily depend on explicit graph structural information as supervision signals.
We propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data.
arXiv Detail & Related papers (2024-02-23T08:29:42Z)
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces a mathematical definition of this novel problem setting: training a structure learner on source graphs so that it generalizes directly to unseen target graphs.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
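The GraphGLOW entry above splits the model into a single structure learner shared across graphs and one GNN per graph. A minimal sketch of that division of labor, with an assumed similarity-based learner and illustrative dimensions:

```python
# One structure learner shared across graphs, plus a separate GNN per graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedStructureLearner(nn.Module):
    """Trained across graphs; reused on unseen graphs without fine-tuning."""
    def __init__(self, in_dim, hid_dim=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x):
        z = F.normalize(self.proj(x), dim=-1)
        return F.relu(z @ z.t())              # inferred adjacency

class PerGraphGNN(nn.Module):
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.lin = nn.Linear(in_dim, n_classes)

    def forward(self, x, adj):
        return adj @ self.lin(x)

# One shared learner, one GNN head per training graph.
learner = SharedStructureLearner(in_dim=16)
gnns = {"graph_a": PerGraphGNN(16, 3), "graph_b": PerGraphGNN(16, 5)}
x_a = torch.randn(40, 16)
logits_a = gnns["graph_a"](x_a, learner(x_a))
# At test time, the trained `learner` is applied to an unseen graph as-is.
```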
- Semantic Graph Neural Network with Multi-measure Learning for Semi-supervised Classification [5.000404730573809]
Graph Neural Networks (GNNs) have attracted increasing attention in recent years.
Recent studies have shown that GNNs are vulnerable to the complex underlying structure of the graph.
We propose a novel framework for semi-supervised classification.
arXiv Detail & Related papers (2022-12-04T06:17:11Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
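The anchor-graph idea in the entry above reduces to a two-view contrastive setup. A hedged sketch, assuming a kNN anchor graph and an InfoNCE-style node-level loss (the paper's exact augmentations and estimator may differ):

```python
# Encode the same nodes through a fixed anchor view and the learned view,
# then pull each node's two embeddings together (illustrative instantiation).
import torch
import torch.nn.functional as F

def knn_anchor(x, k=5):
    # Fixed anchor adjacency: cosine-similarity kNN graph from raw features.
    z = F.normalize(x, dim=-1)
    topk = (z @ z.t()).topk(k, dim=-1).indices
    a = torch.zeros(x.size(0), x.size(0)).scatter_(1, topk, 1.0)
    return ((a + a.t()) > 0).float()

def contrastive_loss(z_anchor, z_learned, temp=0.5):
    # InfoNCE: node i's anchor-view embedding should match its own
    # learned-view embedding more than any other node's.
    za, zl = F.normalize(z_anchor, dim=-1), F.normalize(z_learned, dim=-1)
    logits = za @ zl.t() / temp
    return F.cross_entropy(logits, torch.arange(za.size(0)))

# z_anchor / z_learned would come from one GNN encoder run over each view.
anchor_adj = knn_anchor(torch.randn(30, 16))
loss = contrastive_loss(torch.randn(30, 8), torch.randn(30, 8))
```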
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structure-related features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
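The Distance Encoding entry above augments node representations with structural distances. A minimal sketch using shortest-path distance to a target node set, one instantiation the paper considers; clipping at max_dist and the one-hot encoding are assumptions:

```python
# One-hot shortest-path distances from every node to a target node set,
# ready to concatenate with node features (illustrative DE instantiation).
from collections import deque
import torch

def spd_encoding(edges, num_nodes, targets, max_dist=4):
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [max_dist] * num_nodes              # unreachable/far nodes clipped
    queue = deque((t, 0) for t in targets)
    seen = set(targets)
    while queue:                               # multi-source BFS from targets
        node, d = queue.popleft()
        dist[node] = min(dist[node], d)
        if d < max_dist:
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, d + 1))
    return torch.eye(max_dist + 1)[torch.tensor(dist)]  # one-hot features

# Usage: distances to node 0 on the path graph 0-1-2-3.
de = spd_encoding([(0, 1), (1, 2), (2, 3)], num_nodes=4, targets=[0])
```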
- Pointer Graph Networks [48.44209547013781]
Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront.
Pointer Graph Networks (PGNs) augment sets or graphs with additional inferred edges for improved model generalisation ability.
PGNs allow each node to dynamically point to another node, followed by message passing over these pointers.
arXiv Detail & Related papers (2020-06-11T12:52:31Z)
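The pointer mechanism in the PGN entry above can be sketched directly: each node selects one other node via attention scores, and messages flow over the resulting pointer edges. The hard argmax and residual update below are illustrative; the paper's masking and pointer supervision are omitted.

```python
# Each node points to one other node; messages pass over pointer edges.
import torch
import torch.nn as nn

class PointerStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q, self.k = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.msg = nn.Linear(dim, dim)

    def forward(self, h):
        scores = self.q(h) @ self.k(h).t()        # pairwise pointer scores
        scores.fill_diagonal_(float("-inf"))      # forbid self-pointers
        ptr = scores.argmax(dim=-1)               # each node points once
        return torch.relu(h + self.msg(h[ptr]))   # message over pointers

h = torch.randn(10, 8)
h_next = PointerStep(8)(h)
```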
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
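The Eigen-GNN summary above does not detail the mechanism; the paper's core idea is to inject the eigenspace of the graph structure into the GNN. A hedged sketch, assuming simple concatenation of leading adjacency eigenvectors with the node features:

```python
# Concatenate leading eigenvectors of a symmetric adjacency with node
# features so downstream (often shallow) GNN layers see structure-preserving
# coordinates. The eigh-based decomposition and plain concatenation are an
# illustrative reading of the plug-in idea, not the paper's exact recipe.
import torch

def eigen_features(adj, x, d=4):
    _, eigvecs = torch.linalg.eigh(adj)   # eigenvalues in ascending order
    top = eigvecs[:, -d:]                 # eigenvectors of the d largest
    return torch.cat([x, top], dim=-1)    # augmented GNN input

adj = torch.rand(20, 20)
adj = ((adj + adj.t()) / 2 > 0.7).float()        # random symmetric adjacency
x_aug = eigen_features(adj, torch.randn(20, 5))  # shape (20, 9)
```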