There is more to graphs than meets the eye: Learning universal features
with self-supervision
- URL: http://arxiv.org/abs/2305.19871v1
- Date: Wed, 31 May 2023 14:08:48 GMT
- Title: There is more to graphs than meets the eye: Learning universal features
with self-supervision
- Authors: Laya Das, Sai Munikoti, Mahantesh Halappanavar
- Abstract summary: We study the problem of learning universal features across multiple graphs through self-supervision.
We adopt a transformer backbone that acts as a universal representation learning module for multiple graphs.
Our experiments reveal that leveraging multiple graphs of the same type -- citation networks -- improves the quality of representations.
- Score: 1.399617112958673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning universal features across multiple graphs
through self-supervision. Graph self-supervised learning has been shown to
facilitate representation learning, and produce competitive models compared to
supervised baselines. However, existing methods of self-supervision learn
features from one graph, and thus, produce models that are specialized to a
particular graph. We hypothesize that leveraging multiple graphs of the same
type/class can improve the quality of learnt representations in the model by
extracting features that are universal to the class of graphs. We adopt a
transformer backbone that acts as a universal representation learning module
for multiple graphs. We leverage neighborhood aggregation coupled with
a graph-specific embedding generator to transform disparate node embeddings from
multiple graphs to a common space for the universal backbone. We learn both
universal and graph-specific parameters in an end-to-end manner. Our
experiments reveal that leveraging multiple graphs of the same type -- citation
networks -- improves the quality of representations and results in better
performance on the downstream node classification task compared to self-supervision
with one graph. The results of our study improve the state-of-the-art in graph
self-supervised learning, and bridge the gap between self-supervised and
supervised performance.
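The abstract describes two mechanisms: neighborhood aggregation, and a per-graph embedding generator that projects each graph's native node features into a common space for the shared backbone. A minimal NumPy sketch of that idea, assuming mean-pooling aggregation and a simple linear projection (all names and shapes are hypothetical, not the paper's implementation):

```python
import numpy as np

def aggregate_neighbors(X, A):
    """Mean-aggregate each node's neighborhood features (self-loop included)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # node degrees after self-loops
    return (A_hat @ X) / deg                 # row-normalised aggregation

class GraphSpecificEmbedder:
    """Maps a graph's native feature dimension into the shared backbone space."""
    def __init__(self, in_dim, common_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(in_dim, common_dim))

    def __call__(self, X, A):
        # Aggregate, then project into the universal (common) dimension.
        return aggregate_neighbors(X, A) @ self.W

# Two graphs with different feature dimensionalities share one 16-dim space.
A1 = np.array([[0, 1], [1, 0]], dtype=float)
X1 = np.ones((2, 8))                          # graph 1: 8-dim node features
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X2 = np.ones((3, 32))                         # graph 2: 32-dim node features

emb1 = GraphSpecificEmbedder(8, 16)(X1, A1)
emb2 = GraphSpecificEmbedder(32, 16)(X2, A2)
print(emb1.shape, emb2.shape)  # (2, 16) (3, 16)
```

Both outputs now live in the same 16-dimensional space, so a single transformer backbone can consume nodes from either graph; in the paper both the backbone and the per-graph projections are trained end-to-end.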
Related papers
- OpenGraph: Towards Open Graph Foundation Models [20.401374302429627]
We develop a general graph foundation model to understand the complex topological patterns present in diverse graph data.
We propose a unified graph tokenizer to adapt our graph model to generalize well on unseen graph data.
We also develop a scalable graph transformer, which effectively captures node-wise dependencies within the global topological context.
arXiv Detail & Related papers (2024-03-02T08:05:03Z)
- Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level Graph Representation Learning [9.039193854524763]
We propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for task-agnostic graph representation learning.
We first devise a decoding scheme to provide a theoretical guarantee of keeping the isomorphic consistency.
We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization.
arXiv Detail & Related papers (2023-12-09T10:16:53Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
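A bank of candidate transformations like the one described above produces augmented views of a graph for a contrastive objective. As a rough illustration only (using plain random edge dropping rather than the paper's spectral operations; all names are hypothetical):

```python
import random

def drop_edges(edges, p, seed=0):
    """Randomly remove a fraction p of edges to create an augmented view."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]

def augmentation_bank(edges):
    """A tiny bank of candidate views; a contrastive loss would pull views
    of the same graph together and push views of different graphs apart."""
    return [drop_edges(edges, p=0.2, seed=s) for s in range(4)]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
views = augmentation_bank(edges)
print(len(views))  # 4
```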
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
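The pooling step mentioned above, which turns node-level representations into a graph-level vector for similarity computation, can be sketched as follows (mean pooling and cosine similarity are illustrative choices; names are hypothetical):

```python
import numpy as np

def graph_embedding(node_reps, pooling="mean"):
    """Pool node-level representations into one graph-level vector."""
    return node_reps.mean(axis=0) if pooling == "mean" else node_reps.max(axis=0)

def graph_similarity(g1, g2):
    """Cosine similarity between two pooled graph embeddings."""
    h1, h2 = graph_embedding(g1), graph_embedding(g2)
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

# Two toy graphs whose node representations point in orthogonal directions.
reps_a = np.array([[1.0, 0.0], [1.0, 0.0]])
reps_b = np.array([[0.0, 1.0], [0.0, 1.0]])
print(graph_similarity(reps_a, reps_a))  # 1.0
print(graph_similarity(reps_a, reps_b))  # 0.0
```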
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Group Contrastive Self-Supervised Learning on Graphs [101.45974132613293]
We study self-supervised learning on graphs using contrastive methods.
We argue that contrasting graphs in multiple subspaces enables graph encoders to capture more abundant characteristics.
arXiv Detail & Related papers (2021-07-20T22:09:21Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation of the underlying structure and may contain uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it with a maximum-entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.