There is more to graphs than meets the eye: Learning universal features
with self-supervision
- URL: http://arxiv.org/abs/2305.19871v1
- Date: Wed, 31 May 2023 14:08:48 GMT
- Title: There is more to graphs than meets the eye: Learning universal features
with self-supervision
- Authors: Laya Das, Sai Munikoti, Mahantesh Halappanavar
- Abstract summary: We study the problem of learning universal features across multiple graphs through self-supervision.
We adopt a transformer backbone that acts as a universal representation learning module for multiple graphs.
Our experiments reveal that leveraging multiple graphs of the same type -- citation networks -- improves the quality of representations.
- Score: 1.399617112958673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of learning universal features across multiple graphs
through self-supervision. Graph self-supervised learning has been shown to
facilitate representation learning and to produce models that are competitive
with supervised baselines. However, existing self-supervision methods learn
features from a single graph and thus produce models that are specialized to that
particular graph. We hypothesize that leveraging multiple graphs of the same
type/class can improve the quality of learnt representations in the model by
extracting features that are universal to the class of graphs. We adopt a
transformer backbone that acts as a universal representation learning module
for multiple graphs. We leverage neighborhood aggregation coupled with a
graph-specific embedding generator to map disparate node embeddings from
multiple graphs into a common space for the universal backbone. We learn both
universal and graph-specific parameters in an end-to-end manner. Our
experiments reveal that leveraging multiple graphs of the same type -- citation
networks -- improves the quality of representations and results in better
performance on the downstream node classification task than self-supervision
with a single graph. The results of our study improve the state of the art in graph
self-supervised learning, and bridge the gap between self-supervised and
supervised performance.
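The pipeline described in the abstract -- neighborhood aggregation, a per-graph embedding generator that maps each graph's features into a common space, and a shared universal backbone -- can be sketched schematically. The following is a minimal NumPy illustration, not the authors' implementation: the class names, the toy adjacency matrices, and the single weight matrix standing in for the transformer backbone are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(adj, feats):
    """Mean-aggregate each node's neighborhood, including a self-loop."""
    adj_hat = adj + np.eye(adj.shape[0])
    deg = adj_hat.sum(axis=1, keepdims=True)
    return adj_hat @ feats / deg

class GraphSpecificEmbedder:
    """Projects one graph's raw node features into the shared common space.
    One instance (one weight matrix) is kept per graph."""
    def __init__(self, d_in, d_common):
        self.W = rng.normal(scale=0.1, size=(d_in, d_common))
    def __call__(self, x):
        return x @ self.W

class UniversalBackbone:
    """Stand-in for the shared transformer: one set of parameters applied
    identically to every graph's common-space embeddings."""
    def __init__(self, d_common):
        self.W = rng.normal(scale=0.1, size=(d_common, d_common))
    def __call__(self, z):
        return np.tanh(z @ self.W)

# Two toy "citation networks" with different raw feature dimensionalities.
graphs = {
    "cora-like":   (np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float),
                    rng.normal(size=(3, 5))),
    "pubmed-like": (np.array([[0, 1], [1, 0]], float),
                    rng.normal(size=(2, 7))),
}

d_common = 4
backbone = UniversalBackbone(d_common)
embedders = {name: GraphSpecificEmbedder(x.shape[1], d_common)
             for name, (_, x) in graphs.items()}

# Forward pass: aggregate -> graph-specific projection -> shared backbone.
reps = {}
for name, (adj, x) in graphs.items():
    h = aggregate(adj, x)
    reps[name] = backbone(embedders[name](h))

for name, r in reps.items():
    print(name, r.shape)
```

The graph-specific embedders are necessary because the graphs have incompatible feature dimensionalities (5 and 7 here); only after projection into the common 4-dimensional space can a single set of backbone parameters be trained end-to-end on all graphs at once.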
Related papers
- Revisiting the Necessity of Graph Learning and Common Graph Benchmarks [2.1125997983972207]
Graph machine learning has enjoyed a meteoric rise in popularity since the introduction of deep learning in graph contexts.
The driving belief is that node features are insufficient for these tasks, so benchmark performance accurately reflects improvements in graph learning.
We show that surprisingly, node features are oftentimes more-than-sufficient for these tasks.
arXiv Detail & Related papers (2024-12-09T03:09:04Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Edge but not Least: Cross-View Graph Pooling [76.71497833616024]
This paper presents a cross-view graph pooling (Co-Pooling) method to better exploit crucial graph structure information.
Through cross-view interaction, edge-view pooling and node-view pooling seamlessly reinforce each other to learn more informative graph-level representations.
arXiv Detail & Related papers (2021-09-24T08:01:23Z)
- Generating a Doppelganger Graph: Resembling but Distinct [5.618335078130568]
We propose an approach to generating a doppelganger graph that resembles a given one in many graph properties.
The approach is an orchestration of graph representation learning, generative adversarial networks, and graph realization algorithms.
arXiv Detail & Related papers (2021-01-23T22:08:27Z)
- Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
- Machine Learning on Graphs: A Model and Comprehensive Taxonomy [22.73365477040205]
We bridge the gap between graph neural networks, network embedding and graph regularization models.
Specifically, we propose a Graph Encoder Decoder Model (GraphEDM), which generalizes popular algorithms for semi-supervised learning on graphs.
arXiv Detail & Related papers (2020-05-07T18:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.