A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets
Spiking Neural Networks
- URL: http://arxiv.org/abs/2305.19306v2
- Date: Mon, 19 Feb 2024 14:33:06 GMT
- Title: A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets
Spiking Neural Networks
- Authors: Jintang Li, Huizhe Zhang, Ruofan Wu, Zulun Zhu, Baokun Wang, Changhua
Meng, Zibin Zheng, Liang Chen
- Abstract summary: SpikeGCL is a novel framework to learn binarized 1-bit representations for graphs.
We provide theoretical guarantees to demonstrate that SpikeGCL has expressiveness comparable with its full-precision counterparts.
- Score: 35.35462459134551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While contrastive self-supervised learning has become the de-facto learning
paradigm for graph neural networks, the pursuit of higher task accuracy
requires a larger hidden dimensionality to learn informative and discriminative
full-precision representations, raising concerns about computation, memory
footprint, and energy consumption burden (largely overlooked) for real-world
applications. This work explores a promising direction for graph contrastive
learning (GCL) with spiking neural networks (SNNs), which leverage sparse and
binary characteristics to learn more biologically plausible and compact
representations. We propose SpikeGCL, a novel GCL framework to learn binarized
1-bit representations for graphs, making balanced trade-offs between efficiency
and performance. We provide theoretical guarantees to demonstrate that SpikeGCL
has comparable expressiveness with its full-precision counterparts.
Experimental results demonstrate that, with nearly 32x representation storage
compression, SpikeGCL is comparable to or outperforms many
state-of-the-art supervised and self-supervised methods across several graph
benchmarks.
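The "nearly 32x" figure follows from simple arithmetic: a full-precision embedding spends 32 bits (float32) per feature, while a binarized 1-bit representation spends one. A minimal sketch of this accounting (not the authors' code; shapes and the sign-threshold binarization are illustrative assumptions) using NumPy bit-packing:

```python
# Illustrative sketch: storage cost of 1-bit vs. float32 node embeddings.
# The binarization rule (sign threshold) and sizes are assumptions, not SpikeGCL.
import numpy as np

num_nodes, dim = 1000, 512

# Full-precision embeddings: 32 bits (4 bytes) per feature.
dense = np.random.randn(num_nodes, dim).astype(np.float32)

# Binary "spike" embeddings: one bit per feature, packed 8 per byte.
spikes = (dense > 0).astype(np.uint8)   # {0, 1} spike matrix
packed = np.packbits(spikes, axis=1)    # shape (1000, 512/8) = (1000, 64)

ratio = dense.nbytes / packed.nbytes
print(ratio)  # 32.0: 4 bytes/feature vs. 1/8 byte/feature
```

Any dimensionality that is a multiple of 8 packs without padding, so the ratio is exactly 32; otherwise padding bits make it "nearly" 32x.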
Related papers
- Graph-level Protein Representation Learning by Structure Knowledge
Refinement [50.775264276189695]
This paper focuses on learning representations at the whole-graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR), which uses the data structure to determine the probability of whether a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Learning Robust Representation through Graph Adversarial Contrastive Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z)
- Adversarial Graph Augmentation to Improve Graph Contrastive Learning [21.54343383921459]
We propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training.
We experimentally validate AD-GCL against state-of-the-art GCL methods, achieving performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings.
arXiv Detail & Related papers (2021-06-10T15:34:26Z)
- Graph Barlow Twins: A self-supervised representation learning framework for graphs [25.546290138565393]
We propose a framework for self-supervised graph representation learning - Graph Barlow Twins.
It utilizes a cross-correlation-based loss function instead of negative samples.
We show that our method achieves results as competitive as the best self-supervised methods and fully supervised ones.
arXiv Detail & Related papers (2021-06-04T13:10:51Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations with similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Self-supervised Graph Learning for Recommendation [69.98671289138694]
We explore self-supervised graph learning (SGL) on the user-item graph for recommendation.
An auxiliary self-supervised task reinforces node representation learning via self-discrimination.
Empirical studies on three benchmark datasets demonstrate the effectiveness of SGL.
arXiv Detail & Related papers (2020-10-21T06:35:26Z)