CCGL: Contrastive Cascade Graph Learning
- URL: http://arxiv.org/abs/2107.12576v1
- Date: Tue, 27 Jul 2021 03:37:50 GMT
- Title: CCGL: Contrastive Cascade Graph Learning
- Authors: Xovee Xu, Fan Zhou, Kunpeng Zhang, Siyuan Liu
- Abstract summary: Contrastive Cascade Graph Learning (CCGL) is a novel framework for cascade graph representation learning.
CCGL learns a generic model for graph cascade tasks via self-supervised contrastive pre-training.
It learns a task-specific cascade model via fine-tuning using labeled data.
- Score: 25.43615673424728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised learning, while prevalent for information cascade modeling, often
requires abundant labeled data in training, and the trained model is not easy
to generalize across tasks and datasets. Semi-supervised learning leverages
unlabeled data for cascade understanding in pre-training. It often learns
fine-grained feature-level representations, which can easily result in
overfitting for downstream tasks. Recently, contrastive self-supervised
learning has been designed to alleviate these two fundamental issues in
linguistic and visual tasks. However, its direct applicability to cascade modeling,
especially graph cascade related tasks, remains underexplored. In this work, we
present Contrastive Cascade Graph Learning (CCGL), a novel framework for
cascade graph representation learning in a contrastive, self-supervised, and
task-agnostic way. In particular, CCGL first designs an effective data
augmentation strategy to capture variation and uncertainty. Second, it learns a
generic model for graph cascade tasks via self-supervised contrastive
pre-training using both unlabeled and labeled data. Third, CCGL learns a
task-specific cascade model via fine-tuning using labeled data. Finally, to
make the model transferable across datasets and cascade applications, CCGL
further enhances the model via distillation using a teacher-student
architecture. We demonstrate that CCGL significantly outperforms its supervised
and semi-supervised counterparts for several downstream tasks.
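To make the pipeline concrete, below is a minimal sketch of the contrastive pre-training step in PyTorch, assuming a placeholder encoder and a hypothetical `augment` function for cascade graphs; it illustrates the general recipe, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE loss: two augmented views of the same cascade are positives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # diagonal = positive pairs
    return F.cross_entropy(logits, labels)

class CascadeEncoder(torch.nn.Module):
    """Placeholder encoder mapping pooled cascade-graph features to embeddings."""
    def __init__(self, in_dim, hid_dim=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hid_dim), torch.nn.ReLU(),
            torch.nn.Linear(hid_dim, hid_dim))

    def forward(self, x):
        return self.net(x)

# Pre-training: `augment` (hypothetical) perturbs a cascade graph, e.g. by
# dropping nodes or edges, to capture variation and uncertainty:
#   loss = info_nce(encoder(augment(batch)), encoder(augment(batch)))
# Fine-tuning then attaches a task head (e.g. a popularity regressor) to the
# pre-trained encoder using labeled cascades; distillation transfers the
# fine-tuned teacher into a student for cross-dataset use.
```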
Related papers
- TCGU: Data-centric Graph Unlearning based on Transferable Condensation [36.670771080732486]
Transferable Condensation Graph Unlearning (TCGU) is a data-centric solution to zero-glance graph unlearning.
We show that TCGU achieves superior performance to existing GU methods in terms of model utility, unlearning efficiency, and unlearning efficacy.
arXiv Detail & Related papers (2024-10-09T02:14:40Z)
- GraphFM: A Comprehensive Benchmark for Graph Foundation Model [33.157367455390144]
Foundation Models (FMs) serve as a general class of models for developing artificial intelligence systems.
Despite extensive research into self-supervised learning as the cornerstone of FMs, several outstanding issues persist.
The extent of generalization capability on downstream tasks remains unclear.
It is unknown how effectively these models can scale to large datasets.
arXiv Detail & Related papers (2024-06-12T15:10:44Z)
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Most existing models estimate an initial graph beforehand in order to apply GCN.
A Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs [36.749959232724514]
ZeroG is a new framework tailored to enable cross-dataset generalization.
We address the inherent challenges such as feature misalignment, mismatched label spaces, and negative transfer.
We propose a prompt-based subgraph sampling module that enriches the semantic and structural information of extracted subgraphs.
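As a rough illustration of what a prompt-based subgraph sampling module could look like (the function name and prompt format below are assumptions, not ZeroG's actual API):

```python
import networkx as nx

def sample_prompt_subgraph(g, anchor, radius=2):
    """Extract a k-hop ego-subgraph and pack its text attributes into a prompt."""
    sub = nx.ego_graph(g, anchor, radius=radius)  # k-hop neighborhood of anchor
    # Hypothetical prompt: dataset and class descriptions would typically be
    # prepended so that embeddings align across datasets.
    prompt = "\n".join(f"node {n}: {g.nodes[n].get('text', '')}" for n in sub)
    return sub, prompt
```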
arXiv Detail & Related papers (2024-02-17T09:52:43Z)
- PUMA: Efficient Continual Graph Learning for Node Classification with Graph Condensation [49.00940417190911]
Existing graph representation learning models encounter the catastrophic forgetting problem when learning with newly incoming graphs.
In this paper, we propose a PsUdo-label guided Memory bAnk (PUMA) framework to enhance the efficiency and effectiveness of continual graph learning.
arXiv Detail & Related papers (2023-12-22T05:09:58Z)
- Contrastive Graph Few-Shot Learning [67.01464711379187]
We propose a Contrastive Graph Few-shot Learning framework (CGFL) for graph mining tasks.
CGFL learns data representation in a self-supervised manner, thus mitigating the impact of distribution shift for better generalization.
Comprehensive experiments demonstrate that CGFL outperforms state-of-the-art baselines on several graph mining tasks.
arXiv Detail & Related papers (2022-09-30T20:40:23Z)
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph structured data (DFAD-GNN).
Specifically, DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model act as two discriminators, while a generator derives training graphs that distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
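A schematic sketch of this adversarial loop, with all module internals left as placeholders (a reading of the abstract, not the released DFAD-GNN code):

```python
import torch
import torch.nn.functional as F

def distill_step(generator, teacher, student, opt_s, opt_g, z_dim=32, batch=16):
    """One alternating step: the student imitates, the generator provokes disagreement."""
    z = torch.randn(batch, z_dim)
    # Student step: match the frozen teacher's outputs on generated graphs.
    graphs = generator(z)
    s_loss = F.l1_loss(student(graphs), teacher(graphs).detach())
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    # Generator step: synthesize graphs on which teacher and student disagree.
    graphs = generator(z)
    g_loss = -F.l1_loss(student(graphs), teacher(graphs).detach())
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return s_loss.item(), g_loss.item()
```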
arXiv Detail & Related papers (2022-05-08T08:19:40Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- A Deep Latent Space Model for Graph Representation Learning [10.914558012458425]
We propose a Deep Latent Space Model (DLSM) for directed graphs to incorporate the traditional latent variable based generative model into deep learning frameworks.
Our proposed model consists of a graph convolutional network (GCN) encoder and a decoder, which are layer-wise connected by a hierarchical variational auto-encoder architecture.
Experiments on real-world datasets show that the proposed model achieves state-of-the-art performance on both link prediction and community detection tasks.
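A compact, VGAE-style sketch of the encoder-decoder idea (heavily simplified; DLSM's hierarchical layer-wise connections and directed-edge handling are omitted):

```python
import torch
import torch.nn.functional as F

class LatentSpaceModel(torch.nn.Module):
    """GCN encoder plus inner-product decoder over a variational latent space."""
    def __init__(self, in_dim, lat_dim=16):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, 64)
        self.mu = torch.nn.Linear(64, lat_dim)
        self.logvar = torch.nn.Linear(64, lat_dim)

    def forward(self, x, adj):
        h = F.relu(adj @ self.lin(x))                         # one GCN layer
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return torch.sigmoid(z @ z.t()), mu, logvar           # edge probabilities
```

Link prediction scores node pairs via the decoder output; the usual VAE objective (reconstruction loss plus a KL term on `mu` and `logvar`) would be applied during training.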
arXiv Detail & Related papers (2021-06-22T12:41:19Z)
- Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) recommender systems (RS).
We show that removing non-linearities enhances recommendation performance, consistent with the theory behind simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
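To illustrate the idea, here is a minimal sketch of linear, residual embedding propagation for CF (names and the final averaging are illustrative, not the paper's exact formulation):

```python
import torch

def linear_residual_propagate(emb, norm_adj, num_layers=3):
    """emb: (num_users + num_items, d); norm_adj: normalized interaction graph."""
    out, layer = emb, emb
    for _ in range(num_layers):
        layer = norm_adj @ layer   # linear propagation, no non-linear activation
        out = out + layer          # residual sum over all propagation depths
    return out / (num_layers + 1)

# A user-item score is then a dot product of the propagated embeddings:
#   score(u, i) = out[u] @ out[num_users + i]
```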
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.