Contrastive Multi-View Representation Learning on Graphs
- URL: http://arxiv.org/abs/2006.05582v1
- Date: Wed, 10 Jun 2020 00:49:15 GMT
- Title: Contrastive Multi-View Representation Learning on Graphs
- Authors: Kaveh Hassani and Amir Hosein Khasahmadi
- Abstract summary: We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs.
We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks.
- Score: 13.401746329218017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a self-supervised approach for learning node and graph level
representations by contrasting structural views of graphs. We show that, unlike
visual representation learning, increasing the number of views beyond two or
contrasting multi-scale encodings does not improve performance; the best
performance is achieved by contrasting encodings of first-order neighbors with
a graph diffusion. We achieve new state-of-the-art results in self-supervised
learning on 8 out of 8 node and graph classification benchmarks under the
linear evaluation protocol. For example, on Cora (node) and Reddit-Binary
(graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which
are 5.5% and 2.4% relative improvements over previous state-of-the-art. When
compared to supervised baselines, our approach outperforms them in 4 out of 8
benchmarks. Source code is released at: https://github.com/kavehhassani/mvgrl
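The abstract's core recipe is to contrast node encodings of a first-order (adjacency) view with encodings of a graph-diffusion view. As a minimal sketch of how the second view can be built, the snippet below computes a Personalized PageRank (PPR) diffusion matrix with NumPy. PPR is one common diffusion choice; the teleport value `alpha=0.2` and the dense matrix-inverse formulation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def ppr_diffusion(A, alpha=0.2):
    """PPR diffusion: S = alpha * (I - (1 - alpha) * A_hat)^(-1),
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    A = A + np.eye(A.shape[0])            # add self-loops (new array; caller's A unchanged)
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalization D^{-1/2} A D^{-1/2}
    n = A.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A_hat)

# Toy 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

S = ppr_diffusion(A)
# View 1: the adjacency A (first-order neighbors).
# View 2: the dense diffusion S, where even non-adjacent nodes
# (e.g., 0 and 3) receive nonzero weight.
print(S.round(3))
```

In the full method, each view would be fed through a GNN encoder and the two encodings trained against each other with a contrastive objective. Note the dense inverse is O(n^3), so practical implementations typically substitute sparse or truncated approximations of the diffusion.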
Related papers
- NeCo: Improving DINOv2's spatial representations in 19 GPU hours with Patch Neighbor Consistency [35.768260232640756]
We introduce NeCo: Patch Neighbor Consistency, a novel training loss that enforces patch-level nearest neighbor consistency across a student and teacher model.
Our method leverages a differentiable sorting method applied on top of pretrained representations, such as DINOv2-registers, to bootstrap the learning signal.
This dense post-pretraining leads to superior performance across various models and datasets, despite requiring only 19 hours on a single GPU.
arXiv Detail & Related papers (2024-08-20T17:58:59Z) - SimMatchV2: Semi-Supervised Learning with Graph Consistency [53.31681712576555]
We introduce a new semi-supervised learning algorithm - SimMatchV2.
It formulates various consistency regularizations between labeled and unlabeled data from the graph perspective.
SimMatchV2 has been validated on multiple semi-supervised learning benchmarks.
arXiv Detail & Related papers (2023-08-13T05:56:36Z) - CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Self-Supervised Graph Learning with Proximity-based Views and Channel Contrast [4.761137180081091]
Graph neural networks (GNNs) use neighborhood aggregation as a core component that results in feature smoothing among nodes in proximity.
To tackle this problem, we strengthen the graph with two additional graph views, in which nodes are directly linked to those with the most similar features or local structures.
We propose a method that aims to maximize the agreement between representations across generated views and the original graph.
arXiv Detail & Related papers (2021-06-07T15:38:36Z) - With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations [87.72779294717267]
Using the nearest neighbor as a positive in contrastive losses significantly improves performance on ImageNet classification.
We demonstrate empirically that our method is less reliant on complex data augmentations.
arXiv Detail & Related papers (2021-04-29T17:56:08Z) - Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state of the art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.