Self-Contrastive Graph Diffusion Network
- URL: http://arxiv.org/abs/2307.14613v1
- Date: Thu, 27 Jul 2023 04:00:23 GMT
- Title: Self-Contrastive Graph Diffusion Network
- Authors: Yixian Ma, Kun Zhan
- Abstract summary: We propose a novel framework called the Self-Contrastive Graph Diffusion Network (SCGDN)
Our framework consists of two main components: the Attentional Module (AttM) and the Diffusion Module (DiFM)
Unlike existing methodologies, SCGDN is an augmentation-free approach that avoids "sampling bias" and semantic drift.
- Score: 1.14219428942199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Augmentation techniques and sampling strategies are crucial in contrastive
learning, but in most existing works, augmentation techniques require careful
design, and their sampling strategies can only capture a small amount of
intrinsic supervision information. Additionally, the existing methods require
complex designs to obtain two different representations of the data. To
overcome these limitations, we propose a novel framework called the
Self-Contrastive Graph Diffusion Network (SCGDN). Our framework consists of two
main components: the Attentional Module (AttM) and the Diffusion Module (DiFM).
AttM aggregates higher-order structure and feature information to obtain a
high-quality embedding, while DiFM balances the state of each node in the graph
through Laplacian diffusion learning and allows the cooperative evolution of
adjacency and feature information in the graph. Unlike existing methodologies,
SCGDN is an augmentation-free approach that avoids "sampling bias" and semantic
drift, without the need for pre-training. We sample high-quality positive and
negative pairs based on structure and feature information. If two nodes are neighbors,
they are considered positive samples of each other. If two disconnected nodes
are also unrelated on $k$NN graph, they are considered negative samples for
each other. The contrastive objective makes full use of the proposed sampling
strategy, and a redundancy-reduction term minimizes redundant information in
the embedding while retaining its most discriminative content. Within this
framework, the graph self-contrastive learning paradigm proves remarkably
powerful. SCGDN effectively balances preserving high-order structure
information against overfitting. The results show that SCGDN consistently
outperforms both contrastive and classical methods.
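The sampling rule and objective described in the abstract can be made concrete. Below is a minimal PyTorch sketch, not the authors' released code: it mines positives from the adjacency matrix, mines negatives from pairs that are neither adjacent nor kNN-similar, applies one Laplacian-diffusion step, and combines an InfoNCE-style term with a redundancy-reduction penalty. The function names and the hyperparameters `k`, `tau`, `lam`, and `alpha` are illustrative assumptions.

```python
# Illustrative sketch of SCGDN-style sampling and objective (not the
# authors' code); k, tau, lam, and alpha are assumed hyperparameters.
import torch
import torch.nn.functional as F

def laplacian_diffusion_step(adj, x, alpha=0.15):
    """One symmetric-normalized Laplacian smoothing step on features x:
    x <- x - alpha * L_sym x = (1 - alpha) * x + alpha * A_norm x."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.where(deg > 0, deg.pow(-0.5), torch.zeros_like(deg))
    a_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return (1 - alpha) * x + alpha * (a_norm @ x)

def sample_masks(adj, feats, k=10):
    """Positive pairs: connected nodes. Negative pairs: nodes that are
    neither connected nor neighbors on a cosine kNN graph of the features."""
    n = adj.size(0)
    z = F.normalize(feats, dim=1)
    sim = z @ z.T
    knn_idx = sim.topk(k + 1, dim=1).indices[:, 1:]      # drop self-match
    knn = torch.zeros(n, n).scatter_(1, knn_idx, 1.0)
    knn = (knn + knn.T) > 0                              # symmetrize
    pos = adj > 0
    neg = (adj == 0) & ~knn & ~torch.eye(n, dtype=torch.bool)
    return pos, neg

def self_contrastive_loss(z, pos, neg, tau=0.5, lam=1e-3):
    """InfoNCE-style loss over the mined pairs plus a Barlow-Twins-like
    penalty on off-diagonal feature correlations (redundancy reduction).
    Assumes every node has at least one positive, i.e. one neighbor."""
    z = F.normalize(z, dim=1)
    logits = torch.exp(z @ z.T / tau)
    pos_sum = (logits * pos).sum(dim=1)
    neg_sum = (logits * neg).sum(dim=1)
    nce = -torch.log(pos_sum / (pos_sum + neg_sum) + 1e-12).mean()
    c = (z.T @ z) / z.size(0)                            # feature correlations
    off_diag = c - torch.diag(torch.diagonal(c))
    return nce + lam * off_diag.pow(2).sum()
```

In a full pipeline, the diffused features would pass through the learned encoder and `self_contrastive_loss` would be optimized on its output.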
Related papers
- Generative-Enhanced Heterogeneous Graph Contrastive Learning [11.118517297006894]
Heterogeneous Graphs (HGs) can effectively model complex real-world relationships through multiple types of nodes and edges.
In recent years, inspired by self-supervised learning, contrastive Heterogeneous Graph Neural Networks (HGNNs) have shown great potential by utilizing data augmentation and contrastive discriminators for downstream tasks.
We propose a novel framework, Generative-Enhanced Heterogeneous Graph Contrastive Learning (GHGCL)
arXiv Detail & Related papers (2024-04-03T15:31:18Z) - Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI)
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z) - Hierarchical Contrastive Learning Enhanced Heterogeneous Graph Neural Network [59.860534520941485]
Heterogeneous graph neural networks (HGNNs), as an emerging technique, have shown a superior capacity for dealing with heterogeneous information networks (HINs).
Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available.
In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo.
arXiv Detail & Related papers (2023-04-24T16:17:21Z) - Efficient Relation-aware Neighborhood Aggregation in Graph Neural Networks via Tensor Decomposition [4.041834517339835]
We propose a novel knowledge graph embedding approach that incorporates tensor decomposition within the aggregation function of the Relational Graph Convolutional Network (R-GCN)
Our model enhances the representation of neighboring entities by employing projection matrices of a low-rank tensor defined by relation types.
We adopt a training strategy inspired by contrastive learning to relieve the training limitation that the 1-k-k encoder method faces when handling vast graphs.
arXiv Detail & Related papers (2022-12-11T19:07:34Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Deep Graph Clustering via Dual Correlation Reduction [37.973072977988494]
We propose a novel self-supervised deep graph clustering method termed Dual Correlation Reduction Network (DCRN)
In our method, we first design a siamese network to encode samples. Then, by forcing the cross-view sample correlation matrix and the cross-view feature correlation matrix to approximate two identity matrices, respectively, we reduce the information correlation at both the sample and feature levels (see the sketch after this list).
In order to alleviate representation collapse caused by over-smoothing in GCN, we introduce a propagation regularization term to enable the network to gain long-distance information.
arXiv Detail & Related papers (2021-12-29T04:05:38Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from an implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
arXiv Detail & Related papers (2020-10-27T15:12:21Z) - Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z) - Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel light-weight module, the Attentive WaveBlock (AWB)
AWB can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
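The Dual Correlation Reduction Network (DCRN) entry above describes forcing cross-view correlation matrices toward identity matrices. Here is a minimal sketch of that idea, assuming simple cosine-normalized correlations; it is an illustration, not the DCRN authors' implementation.

```python
# Minimal sketch of dual correlation reduction (an assumed illustration,
# not the DCRN authors' code).
import torch
import torch.nn.functional as F

def dual_correlation_reduction(z1, z2):
    """z1, z2: [n, d] embeddings of the same n nodes from two views."""
    n, d = z1.shape
    # Cross-view sample correlation matrix, pushed toward I_n: matching
    # nodes should agree across views, distinct nodes should decorrelate.
    s = F.normalize(z1, dim=1) @ F.normalize(z2, dim=1).T
    sample_term = (s - torch.eye(n)).pow(2).mean()
    # Cross-view feature correlation matrix, pushed toward I_d: each
    # latent dimension should match its counterpart across views and
    # decorrelate from the rest, reducing redundancy.
    f = F.normalize(z1, dim=0).T @ F.normalize(z2, dim=0)
    feature_term = (f - torch.eye(d)).pow(2).mean()
    return sample_term + feature_term
```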
This list is automatically generated from the titles and abstracts of the papers on this site.