A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets
Spiking Neural Networks
- URL: http://arxiv.org/abs/2305.19306v2
- Date: Mon, 19 Feb 2024 14:33:06 GMT
- Title: A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets
Spiking Neural Networks
- Authors: Jintang Li, Huizhe Zhang, Ruofan Wu, Zulun Zhu, Baokun Wang, Changhua
Meng, Zibin Zheng, Liang Chen
- Abstract summary: SpikeGCL is a novel framework to learn binarized 1-bit representations for graphs.
We provide theoretical guarantees that SpikeGCL has expressiveness comparable to its full-precision counterparts.
- Score: 35.35462459134551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While contrastive self-supervised learning has become the de-facto learning
paradigm for graph neural networks, the pursuit of higher task accuracy
requires a larger hidden dimensionality to learn informative and discriminative
full-precision representations, raising largely overlooked concerns about the
computation, memory footprint, and energy consumption burden of real-world
applications. This work explores a promising direction for graph contrastive
learning (GCL) with spiking neural networks (SNNs), which leverage sparse and
binary characteristics to learn more biologically plausible and compact
representations. We propose SpikeGCL, a novel GCL framework to learn binarized
1-bit representations for graphs, making balanced trade-offs between efficiency
and performance. We provide theoretical guarantees to demonstrate that SpikeGCL
has comparable expressiveness with its full-precision counterparts.
Experimental results demonstrate that, with nearly 32x representation storage
compression, SpikeGCL is either comparable to or outperforms many
state-of-the-art supervised and self-supervised methods across several graph
benchmarks.
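The abstract's central claims are the 1-bit spike representation and the roughly 32x storage saving (a float32 dimension costs 32 bits; a spike costs 1). Below is a minimal, hypothetical PyTorch sketch of this idea, not SpikeGCL's actual implementation: the firing threshold, the rectangular surrogate-gradient window, and the bit-packing step are all illustrative assumptions.

```python
# Minimal sketch: 1-bit spike binarization with a surrogate gradient.
# Hypothetical illustration only -- not SpikeGCL's actual code.
import numpy as np
import torch


class BinarySpike(torch.autograd.Function):
    """Heaviside step in the forward pass, rectangular surrogate in the backward."""

    @staticmethod
    def forward(ctx, membrane, threshold):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane >= threshold).float()  # spikes in {0, 1}

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Let gradients flow only near the threshold (assumed window of 0.5).
        window = ((membrane - ctx.threshold).abs() < 0.5).float()
        return grad_output * window, None


def binarize(h: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    """Map full-precision embeddings [N, D] to binary spike codes."""
    return BinarySpike.apply(h, threshold)


h = torch.randn(2708, 512)  # stand-in for full-precision node embeddings
s = binarize(h)             # same shape, values in {0, 1}

# Pack 8 spikes per byte to realize the storage saving: 32 bits -> 1 bit.
packed = np.packbits(s.numpy().astype(np.uint8), axis=1)
print(h.numpy().nbytes / packed.nbytes)  # ~32.0
```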
Related papers
- GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning [0.0]
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations.
Current graph neural network models face the challenge of requiring extensive labeled data.
We propose GRE^2-MDCL, Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning.
arXiv Detail & Related papers (2024-09-12T03:09:05Z)
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a graph node clustering method with fully learnable augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes GraphCoCo, an effective graph complementary contrastive learning approach that reduces the redundant semantic information shared between augmented views.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Learning Robust Representation through Graph Adversarial Contrastive Learning [6.332560610460623]
Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
arXiv Detail & Related papers (2022-01-31T07:07:51Z)
- Adversarial Graph Augmentation to Improve Graph Contrastive Learning [21.54343383921459]
We propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training.
We experimentally validate AD-GCL against state-of-the-art GCL methods, achieving performance gains of up to 14% in unsupervised, 6% in transfer, and 3% in semi-supervised learning settings.
arXiv Detail & Related papers (2021-06-10T15:34:26Z)
- Graph Barlow Twins: A self-supervised representation learning framework for graphs [25.546290138565393]
We propose Graph Barlow Twins, a framework for self-supervised graph representation learning.
It utilizes a cross-correlation-based loss function instead of negative samples (a minimal sketch of this loss appears after the list below).
We show that our method achieves results competitive with both the best self-supervised methods and fully supervised ones.
arXiv Detail & Related papers (2021-06-04T13:10:51Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
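As referenced in the Graph Barlow Twins entry above, its key move is replacing negative samples with a cross-correlation objective. Here is a minimal sketch of a Barlow-Twins-style loss on two augmented graph views; the standardization epsilon and the off-diagonal weight `lambda_` are illustrative assumptions, not the paper's reported hyperparameters.

```python
# Minimal sketch of a Barlow-Twins-style loss for two graph views.
import torch


def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lambda_: float = 5e-3) -> torch.Tensor:
    """z1, z2: [N, D] node embeddings from two augmented views of one graph."""
    n = z1.shape[0]
    # Standardize each embedding dimension across nodes.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)

    c = (z1.T @ z2) / n  # [D, D] cross-correlation matrix

    diag = torch.diagonal(c)
    on_diag = (diag - 1).pow(2).sum()               # invariance: diagonal -> 1
    off_diag = (c - torch.diag(diag)).pow(2).sum()  # redundancy: off-diagonal -> 0
    return on_diag + lambda_ * off_diag


# Usage: encode two augmented views with the same GNN encoder, then
# loss = barlow_twins_loss(encoder(view1), encoder(view2))
# No negative samples are required by this objective.
```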