Learning Robust Representation through Graph Adversarial Contrastive Learning
- URL: http://arxiv.org/abs/2201.13025v1
- Date: Mon, 31 Jan 2022 07:07:51 GMT
- Title: Learning Robust Representation through Graph Adversarial Contrastive Learning
- Authors: Jiayan Guo, Shangyang Li, Yue Zhao, Yan Zhang
- Abstract summary: Existing studies show that node representations generated by graph neural networks (GNNs) are vulnerable to adversarial attacks.
We propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised learning.
- Score: 6.332560610460623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing studies show that node representations generated by graph neural
networks (GNNs) are vulnerable to adversarial attacks, such as unnoticeable
perturbations of the adjacency matrix and node features. It is therefore
necessary to learn robust representations in graph neural networks. To
improve the
robustness of graph representation learning, we propose a novel Graph
Adversarial Contrastive Learning framework (GraphACL) by introducing
adversarial augmentations into graph self-supervised learning. In this
framework, we maximize the mutual information between local and global
representations of a perturbed graph and its adversarial augmentations, where
the adversarial graphs can be generated in either a supervised or an
unsupervised manner. Based on the Information Bottleneck Principle, we
theoretically prove that our method obtains a much tighter bound, thus
improving the robustness of graph representation learning. Empirically, we
evaluate several methods on a range of node classification benchmarks, and the
results demonstrate that GraphACL achieves accuracy comparable to that of
previous supervised methods.
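To make the objective concrete, the following is a minimal PyTorch sketch of one GraphACL-style update: an unsupervised FGSM-style perturbation of the node features serves as the adversarial augmentation, and a DGI-style discriminator estimates mutual information between local node embeddings and a global readout. The encoder, attack budget, readout, and discriminator are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H = ReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, x):
        return F.relu(self.lin(a_hat @ x))

def local_global_mi_loss(local_h, global_s, corrupt_h, disc):
    """DGI-style MI estimator: score (node, summary) pairs as positives
    and (corrupted node, summary) pairs as negatives."""
    pos = disc(local_h * global_s).squeeze(-1)    # [N]
    neg = disc(corrupt_h * global_s).squeeze(-1)  # [N]
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Toy graph (assumption): 8 nodes, random edges, row-normalized adjacency.
N, D, H = 8, 16, 32
x = torch.randn(N, D)
a = torch.eye(N) + torch.rand(N, N).round()
a_hat = a / a.sum(dim=1, keepdim=True)

encoder = GCNLayer(D, H)
disc = nn.Linear(H, 1)  # simplified stand-in for a bilinear discriminator
opt = torch.optim.Adam(list(encoder.parameters()) + list(disc.parameters()), lr=1e-3)

# Step 1: unsupervised adversarial augmentation of node features via a
# single FGSM-style gradient-ascent step on the contrastive loss (the
# framework also allows supervised attack generation).
eps = 0.05
x_req = x.clone().requires_grad_(True)
h = encoder(a_hat, x_req)
s = torch.sigmoid(h.mean(dim=0))                  # global readout
h_neg = encoder(a_hat, x_req[torch.randperm(N)])  # row-shuffled negatives
mi_loss = local_global_mi_loss(h, s, h_neg, disc)
(grad,) = torch.autograd.grad(mi_loss, x_req)
x_adv = (x + eps * grad.sign()).detach()          # worst-case view

# Step 2: maximize local-global mutual information on the adversarial view.
h = encoder(a_hat, x_adv)
s = torch.sigmoid(h.mean(dim=0))
h_neg = encoder(a_hat, x_adv[torch.randperm(N)])
loss = local_global_mi_loss(h, s, h_neg, disc)
opt.zero_grad()
loss.backward()
opt.step()
print(f"contrastive loss after one step: {loss.item():.4f}")
```

In the full framework, the mutual information is maximized between the perturbed graph and its adversarial augmentations; the single-view loss above is a simplification for brevity.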
Related papers
- Self-Supervised Conditional Distribution Learning on Graphs [15.730933577970687]
We present an end-to-end graph representation learning model to align the conditional distributions of weakly and strongly augmented features over the original features.
This alignment effectively reduces the risk of disrupting intrinsic semantic information through graph-structured data augmentation.
arXiv Detail & Related papers (2024-11-20T07:26:36Z)
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We postulate a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
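As a brief aside, the unrolling idea behind GDN can be sketched in a few lines: each layer performs one truncated proximal gradient step, with the step size and sparsity threshold learned per iteration. The least-squares data term and soft-thresholding prox below are illustrative assumptions, not the paper's exact deconvolution objective.

```python
import torch
import torch.nn as nn

class UnrolledProxGD(nn.Module):
    """K truncated proximal gradient steps, unrolled into network layers,
    for recovering a sparse latent adjacency A from an observation Y."""
    def __init__(self, k_steps: int = 5):
        super().__init__()
        # Learnable per-iteration step size and soft-threshold.
        self.alpha = nn.Parameter(torch.full((k_steps,), 0.1))
        self.tau = nn.Parameter(torch.full((k_steps,), 0.01))
        self.k_steps = k_steps

    def forward(self, y):
        a = torch.zeros_like(y)
        for k in range(self.k_steps):
            a = a - self.alpha[k] * (a - y)  # gradient step on 0.5*||A - Y||^2
            # Proximal step for an L1 sparsity penalty: soft thresholding.
            a = torch.sign(a) * torch.clamp(a.abs() - self.tau[k], min=0.0)
            a = 0.5 * (a + a.transpose(-1, -2))  # keep the estimate symmetric
        return a

# Usage: supervised training on (observed, latent) graph pairs.
model = UnrolledProxGD()
y = torch.rand(4, 10, 10)    # batch of observed matrices
target = (y > 0.7).float()   # toy ground-truth graphs
loss = nn.functional.mse_loss(model(y), target)
loss.backward()
```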
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Self-supervised Consensus Representation Learning for Attributed Graph [15.729417511103602]
We introduce a self-supervised learning mechanism into graph representation learning.
We propose a novel Self-supervised Consensus Representation Learning (SCRL) framework.
Our proposed SCRL method treats the graph from two perspectives: a topology graph and a feature graph.
arXiv Detail & Related papers (2021-08-10T07:53:09Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes from an implicit distribution to serve as enhanced negative samples.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Unsupervised Hierarchical Graph Representation Learning by Mutual Information Maximization [8.14036521415919]
We present an unsupervised graph representation learning method, Unsupervised Hierarchical Graph Representation (UHGR).
Our method focuses on maximizing mutual information between "local" and high-level "global" representations.
The results show that the proposed method achieves comparable results to state-of-the-art supervised methods on several benchmarks.
arXiv Detail & Related papers (2020-03-18T18:21:48Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.