Contrastive Graph Few-Shot Learning
- URL: http://arxiv.org/abs/2210.00084v1
- Date: Fri, 30 Sep 2022 20:40:23 GMT
- Title: Contrastive Graph Few-Shot Learning
- Authors: Chunhui Zhang, Hongfu Liu, Jundong Li, Yanfang Ye, Chuxu Zhang
- Abstract summary: We propose a Contrastive Graph Few-shot Learning framework (CGFL) for graph mining tasks.
CGFL learns data representation in a self-supervised manner, thus mitigating the distribution shift impact for better generalization.
Comprehensive experiments demonstrate that CGFL outperforms state-of-the-art baselines on several graph mining tasks.
- Score: 67.01464711379187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prevailing deep graph learning models often suffer from the label sparsity issue. Although many graph few-shot learning (GFL) methods have been developed to avoid performance degradation in the face of limited annotated data, they rely excessively on labeled data, and the distribution shift at test time can impair their generalization ability. Additionally, they lack general applicability because their designs are coupled with task- or data-specific characteristics. To this end, we propose a general and effective Contrastive Graph Few-shot Learning framework (CGFL). CGFL leverages a self-distilled contrastive learning procedure to boost GFL. Specifically, our model first pre-trains a graph encoder with contrastive learning on unlabeled data. The trained encoder is then frozen as a teacher model that distills a student model with a contrastive loss, and the distilled student is finally fed to GFL. CGFL learns data representations in a self-supervised manner, which mitigates the impact of distribution shift for better generalization and makes the model task- and data-independent for general graph mining purposes. Furthermore, we introduce an information-based method to quantitatively measure the capability of CGFL. Comprehensive experiments demonstrate that CGFL outperforms state-of-the-art baselines on several graph mining tasks in the few-shot scenario. We also provide a quantitative measurement of CGFL's success.
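As described in the abstract, CGFL proceeds in three stages: contrastive pre-training of a graph encoder on unlabeled data, self-distillation from the frozen encoder (teacher) into a student with a contrastive loss, and finally few-shot learning on the distilled representations. The snippet below is a minimal PyTorch sketch of the first two stages; the encoder, the feature-masking augmentation, and the InfoNCE loss form are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of CGFL-style self-distilled contrastive pre-training.
# Encoder, augmentation, and loss details are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """Placeholder GNN encoder: two mean-aggregation message-passing layers."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        # adj: dense, row-normalized adjacency with self-loops (assumption)
        h = F.relu(self.lin1(adj @ x))
        return self.lin2(adj @ h)

def augment(x, drop_prob=0.2):
    """Random feature masking; one of many possible graph augmentations."""
    mask = (torch.rand_like(x) > drop_prob).float()
    return x * mask

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE between two views of the same nodes."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                       # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)

def pretrain_teacher(teacher, x, adj, epochs=100, lr=1e-3):
    """Stage 1: contrastive pre-training of the encoder on unlabeled data."""
    opt = torch.optim.Adam(teacher.parameters(), lr=lr)
    for _ in range(epochs):
        z1 = teacher(augment(x), adj)
        z2 = teacher(augment(x), adj)
        loss = info_nce(z1, z2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return teacher

def distill_student(teacher, student, x, adj, epochs=100, lr=1e-3):
    """Stage 2: freeze the teacher and distill a student with a contrastive loss."""
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        with torch.no_grad():
            t = teacher(x, adj)                      # frozen teacher targets
        s = student(augment(x), adj)
        loss = info_nce(s, t)                        # pull student toward teacher
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```

The distilled student's embeddings would then feed the third stage, a standard few-shot learner over support/query splits, per the abstract's statement that the distilled model is finally fed to GFL.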
Related papers
- GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces a pioneering approach called GraphGuard to tackle these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z)
- Self-Pro: A Self-Prompt and Tuning Framework for Graph Neural Networks [10.794305560114903]
Self-Prompt is a prompting framework for graphs based on the model and data itself.
We introduce asymmetric graph contrastive learning for pretext to address heterophily and align the objectives of pretext and downstream tasks.
We conduct extensive experiments on 11 benchmark datasets to demonstrate its superiority.
arXiv Detail & Related papers (2023-10-16T12:58:04Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Features Based Adaptive Augmentation for Graph Contrastive Learning [0.0]
Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning.
We introduce a Feature Based Adaptive Augmentation (FebAA) approach, which identifies and preserves potentially influential features.
We successfully improved the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
arXiv Detail & Related papers (2022-07-05T03:41:20Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervision is recently surging at its new frontier of graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- CCGL: Contrastive Cascade Graph Learning [25.43615673424728]
Contrastive Cascade Graph Learning (CCGL) is a novel framework for cascade graph representation learning.
CCGL learns a generic model for graph cascade tasks via self-supervised contrastive pre-training.
It learns a task-specific cascade model via fine-tuning using labeled data.
arXiv Detail & Related papers (2021-07-27T03:37:50Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- GraphFL: A Federated Learning Framework for Semi-Supervised Node Classification on Graphs [48.13100386338979]
We propose the first FL framework, namely GraphFL, for semi-supervised node classification on graphs.
We propose two GraphFL methods to respectively address the non-IID issue in graph data and handle the tasks with new label domains.
We adopt representative graph neural networks as GraphSSC methods and evaluate GraphFL on multiple graph datasets.
arXiv Detail & Related papers (2020-12-08T03:13:29Z)
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to computation and memory costs.
Subg-Con is proposed to capture regional structure information by utilizing the strong correlation between central nodes and their sampled subgraphs (a minimal sketch of this node-subgraph contrast appears after this list).
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
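As referenced in the Subg-Con entry above, the core idea is to contrast a central node's embedding against an embedding of its own sampled subgraph (positive) versus subgraphs sampled around other nodes (negatives). The snippet below is a minimal PyTorch sketch of that node-vs-subgraph contrast; the uniform 1-hop ego sampling, mean pooling, and InfoNCE-style loss are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of node-vs-subgraph contrast in the spirit of Subg-Con.
# Sampling strategy, pooling, and loss form are assumptions for illustration.
import torch
import torch.nn.functional as F

def sample_ego_subgraph(adj, center, size=8):
    """Sample up to `size` 1-hop neighbors of `center` from a dense adjacency.
    (Assumption: Subg-Con itself uses importance-based subgraph sampling.)"""
    neighbors = torch.nonzero(adj[center], as_tuple=False).flatten()
    if neighbors.numel() > size:
        neighbors = neighbors[torch.randperm(neighbors.numel())[:size]]
    return torch.cat([torch.tensor([center], dtype=torch.long), neighbors])

def subgraph_contrast_loss(node_emb, adj, batch_nodes, tau=0.5):
    """InfoNCE-style loss: each central node should match the pooled embedding
    of its own subgraph rather than the subgraphs of other nodes in the batch.
    `batch_nodes` is a list of integer node indices."""
    centers, pooled = [], []
    for v in batch_nodes:
        nodes = sample_ego_subgraph(adj, v)
        centers.append(node_emb[v])
        pooled.append(node_emb[nodes].mean(dim=0))   # mean-pool the subgraph
    z_c = F.normalize(torch.stack(centers), dim=-1)
    z_g = F.normalize(torch.stack(pooled), dim=-1)
    logits = z_c @ z_g.t() / tau                     # (B, B); diagonal = positives
    labels = torch.arange(len(batch_nodes), device=z_c.device)
    return F.cross_entropy(logits, labels)
```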
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.