Graph-level Protein Representation Learning by Structure Knowledge Refinement
- URL: http://arxiv.org/abs/2401.02713v1
- Date: Fri, 5 Jan 2024 09:05:33 GMT
- Title: Graph-level Protein Representation Learning by Structure Knowledge Refinement
- Authors: Ge Wang, Zelin Zang, Jiangbin Zheng, Jun Xia, Stan Z. Li
- Abstract summary: This paper focuses on learning representations at the whole-graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR), which uses the structure of the data to estimate the probability that a pair is positive or negative.
- Score: 50.775264276189695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on learning representations at the whole-graph
level in an unsupervised manner. Learning graph-level representations plays an
important role in a variety of real-world problems such as molecule property
prediction, protein structure feature extraction, and social network analysis.
The mainstream approach, known as Graph Contrastive Learning (GCL), uses
contrastive learning to facilitate graph feature extraction. Although
effective, GCL suffers from complications inherent to contrastive learning,
such as the effect of false negative pairs, and its augmentation strategies
adapt poorly to diverse graph datasets. Motivated by these problems, we
propose a novel framework called Structure Knowledge Refinement (SKR), which
uses the structure of the data to estimate the probability that a pair is
positive or negative. We also propose an augmentation strategy that naturally
preserves the semantic meaning of the original data and is compatible with the
SKR framework, and we illustrate the effectiveness of SKR through both
intuition and experiments. Experimental results on graph-level classification
tasks demonstrate that SKR is superior to most state-of-the-art baselines.
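The abstract does not spell out SKR's concrete formulation, but its core idea
(replacing the hard positive/negative labels of standard contrastive learning
with a structure-derived probability that each pair is positive) can be
sketched as a soft-target contrastive loss. The Python sketch below is an
illustrative assumption, not the paper's implementation; in particular, the
probability estimator (a temperature softmax over embedding similarities,
mixed with the identity) is a hypothetical stand-in for SKR's actual
refinement step.

    # Hedged sketch: a contrastive loss whose targets are soft pair
    # probabilities derived from embedding-space structure, rather than
    # hard one-hot labels. The estimator is an illustrative stand-in,
    # not the SKR paper's actual method.
    import torch
    import torch.nn.functional as F

    def soft_pair_contrastive_loss(z1: torch.Tensor,
                                   z2: torch.Tensor,
                                   tau: float = 0.5) -> torch.Tensor:
        """z1, z2: [batch, dim] graph-level embeddings of two views per graph."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        sim = z1 @ z2.t() / tau  # pairwise cross-view similarities
        with torch.no_grad():
            # Structure-derived soft labels: each row is a distribution over
            # candidate positives; mixing with the identity keeps the two
            # views of the same graph at least partly positive (a heuristic).
            p_pos = 0.5 * F.softmax(sim, dim=1) \
                    + 0.5 * torch.eye(len(z1), device=z1.device)
        log_q = F.log_softmax(sim, dim=1)  # model's pair distribution
        return -(p_pos * log_q).sum(dim=1).mean()  # soft cross-entropy

With p_pos replaced by the identity matrix, this reduces to the standard
InfoNCE objective with in-batch negatives; the soft target instead downweights
pairs that the data structure suggests are false negatives, which is exactly
the complication the abstract highlights.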
Related papers
- Efficient and Robust Continual Graph Learning for Graph Classification in Biology [4.1259781599165635]
We present Perturbed and Sparsified Continual Graph Learning (PSCGL), a robust and efficient continual graph learning framework for graph data classification.
PSCGL not only retains knowledge across tasks but also enhances the efficiency and robustness of graph classification models in biology.
arXiv Detail & Related papers (2024-11-18T15:47:37Z)
- Adversarial Curriculum Graph Contrastive Learning with Pair-wise Augmentation [35.875976206333185]
ACGCL leverages pair-wise augmentation to generate graph-level positive and negative samples with controllable similarity.
Within the ACGCL framework, we have devised a novel adversarial curriculum training methodology.
A comprehensive assessment of ACGCL is conducted through extensive experiments on six well-known benchmark datasets.
arXiv Detail & Related papers (2024-02-16T06:17:50Z)
- On the Adversarial Robustness of Graph Contrastive Learning Methods [9.675856264585278]
We introduce a comprehensive evaluation protocol tailored to assess the robustness of graph contrastive learning (GCL) models.
We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario.
With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
arXiv Detail & Related papers (2023-11-29T17:59:18Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix (a minimal sketch of this scope-widening idea appears after this list).
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Graph Structure Learning with Variational Information Bottleneck [70.62851953251253]
We propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL.
VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks.
arXiv Detail & Related papers (2021-12-16T14:22:13Z)
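The UGCL entry above describes building contextual scopes by controlling the
power of an adjacency matrix. A minimal sketch of that scope-widening
mechanism follows; the symmetric normalization and the dense-matrix
representation are illustrative assumptions, and the exact way UGCL pairs
scopes in its contrastive loss is not reproduced here.

    # Hedged sketch: propagating node features through increasing powers of
    # a normalized adjacency matrix yields representations that summarize
    # progressively wider neighborhoods ("contextual scopes").
    import torch

    def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
        """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
        return d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

    def contextual_scopes(x: torch.Tensor, adj: torch.Tensor,
                          max_power: int = 3) -> list:
        """Return [A X, A^2 X, ..., A^k X]: features at widening scopes."""
        a_norm = normalize_adj(adj)
        scopes, h = [], x
        for _ in range(max_power):
            h = a_norm @ h
            scopes.append(h)
        return scopes

    # Example: a 4-node path graph with random 8-dimensional features.
    adj = torch.tensor([[0., 1, 0, 0],
                        [1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [0, 0, 1, 0]])
    local, mid, contextual = contextual_scopes(torch.randn(4, 8), adj)

A local scope (small power) and a contextual scope (large power) would then
serve as the two views in a contrastive objective.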
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.