Bayesian Robust Graph Contrastive Learning
- URL: http://arxiv.org/abs/2205.14109v1
- Date: Fri, 27 May 2022 17:21:17 GMT
- Title: Bayesian Robust Graph Contrastive Learning
- Authors: Yancheng Wang, Yingzhen Yang
- Abstract summary: We propose a novel and robust method, Bayesian Robust Graph Contrastive Learning (BRGCL), which trains a GNN encoder to learn robust node representations.
Experiments on public and large-scale benchmarks demonstrate the superior performance of BRGCL and the robustness of the learned node representations.
- Score: 4.6761071607574545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have been widely used to learn node representations and achieve outstanding performance on tasks such as node classification. However, noise, which inevitably exists in real-world graph data, can considerably degrade the performance of GNNs because it is easily propagated via the graph structure. In this work, we propose a novel and robust method, Bayesian Robust Graph Contrastive Learning (BRGCL), which trains a GNN encoder to learn robust node representations. The BRGCL encoder is fully unsupervised. Two steps are executed iteratively at each training epoch: (1) estimating confident nodes and computing robust cluster prototypes of node representations through a novel Bayesian nonparametric method; (2) prototypical contrastive learning between the node representations and the robust cluster prototypes. Experiments on public and large-scale benchmarks demonstrate the superior performance of BRGCL and the robustness of the learned node representations. The code of BRGCL is available at https://github.com/BRGCL-code/BRGCL-code.
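The two-step loop above is the core of the method. Below is a minimal sketch of how such a loop could look, assuming a generic GNN encoder, sklearn's BayesianGaussianMixture (a Dirichlet-process mixture) as a stand-in for the paper's Bayesian nonparametric prototype estimation, and a hypothetical confidence threshold tau; it is an illustration, not the authors' implementation.

```python
# Minimal sketch of the two-step BRGCL loop (illustrative only). The encoder,
# the threshold `tau`, and the use of BayesianGaussianMixture as a stand-in
# for the paper's Bayesian nonparametric estimator are all assumptions.
import torch
import torch.nn.functional as F
from sklearn.mixture import BayesianGaussianMixture

def estimate_prototypes(z, max_clusters=10, tau=0.9):
    """Step (1): robust cluster prototypes from confident nodes only."""
    zn = F.normalize(z, dim=1).detach().cpu().numpy()
    # A Dirichlet-process mixture prunes unused components, so the number
    # of prototypes is inferred from the data rather than fixed in advance.
    dpmm = BayesianGaussianMixture(
        n_components=max_clusters,
        weight_concentration_prior_type="dirichlet_process",
    ).fit(zn)
    resp = dpmm.predict_proba(zn)            # soft cluster assignments
    confident = resp.max(axis=1) >= tau      # keep only high-confidence nodes
    labels = resp.argmax(axis=1)
    kept = sorted(set(labels[confident].tolist()))
    protos = torch.stack([
        torch.from_numpy(zn[confident & (labels == k)].mean(axis=0))
        for k in kept
    ]).float()
    remap = {k: i for i, k in enumerate(kept)}
    proto_labels = torch.tensor([remap.get(int(l), -1) for l in labels])
    return F.normalize(protos, dim=1), proto_labels

def prototypical_loss(z, protos, proto_labels, temp=0.5):
    """Step (2): contrast each node against the cluster prototypes."""
    mask = proto_labels >= 0
    logits = F.normalize(z[mask], dim=1) @ protos.t() / temp
    return F.cross_entropy(logits, proto_labels[mask])

# Each training epoch: z = encoder(graph); protos, y = estimate_prototypes(z);
# then backpropagate prototypical_loss(z, protos, y) into the encoder.
```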
Related papers
- Low-Rank Graph Contrastive Learning for Node Classification [10.520101507424577]
Graph Neural Networks (GNNs) have been widely used to learn node representations and achieve outstanding performance on tasks such as node classification.
We propose a novel and robust GNN encoder, Low-Rank Graph Contrastive Learning (LR-GCL).
arXiv Detail & Related papers (2024-02-14T22:15:37Z)
- UniG-Encoder: A Universal Feature Encoder for Graph and Hypergraph Node Classification [6.977634174845066]
UniG-Encoder is a universal feature encoder designed for both graph and hypergraph representation learning.
The architecture starts with a forward transformation of the topological relationships of connected nodes into edge or hyperedge features.
The encoded node embeddings are then derived from the reversed transformation, described by the transpose of the projection matrix; a toy sketch of this construction follows below.
arXiv Detail & Related papers (2023-08-03T09:32:50Z)
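As a rough illustration of the forward/reverse projection described above, here is a toy sketch. The row-normalized incidence matrix as the projection, and the small MLP on edge features, are assumptions for illustration, not the paper's exact design.

```python
# Toy sketch of the projection-matrix idea (illustrative only): project node
# features onto (hyper)edges, transform them, then map back with the transpose.
import torch

def unig_style_encode(X, incidence, mlp):
    """X: (n_nodes, d) node features; incidence: (n_edges, n_nodes) 0/1 matrix
    whose rows mark the nodes joined by each edge or hyperedge."""
    # Forward transformation: topological relationships -> edge features.
    P = incidence / incidence.sum(dim=1, keepdim=True)   # row-normalized
    edge_feats = mlp(P @ X)                              # (n_edges, d')
    # Reversed transformation: the transpose of the projection matrix
    # sends edge features back to nodes, giving the node embeddings.
    return P.t() @ edge_feats                            # (n_nodes, d')

# Example: 5 nodes, two edges and one 3-node hyperedge.
X = torch.randn(5, 8)
incidence = torch.tensor([[1., 1., 0., 0., 0.],
                          [0., 1., 1., 0., 0.],
                          [0., 0., 1., 1., 1.]])
mlp = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU())
Z = unig_style_encode(X, incidence, mlp)   # shape: (5, 16)
```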
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method on various tasks, including node classification on graphs; a toy sketch of Gumbel-Softmax message passing follows below.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
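To make the all-pair scheme concrete, here is a toy sketch of message passing with Gumbel-Softmax sampling over all node pairs. Note the hedge: NodeFormer's kernelized operator is precisely what avoids materializing the dense N x N matrix below; the O(N^2) form is kept here only for clarity, and the temperature tau is an assumed parameter.

```python
# Toy sketch of all-pair message passing with Gumbel-Softmax (illustrative
# only). NodeFormer's kernelization avoids this dense N x N matrix; the
# O(N^2) form is kept here purely for readability.
import torch
import torch.nn.functional as F

def gumbel_softmax_message_passing(Q, K, V, tau=0.5):
    """Q, K, V: (n_nodes, d). Propagates signals between arbitrary nodes."""
    scores = Q @ K.t() / Q.size(1) ** 0.5     # all-pair similarities
    # Adding Gumbel noise and taking a softmax draws a differentiable
    # sample from a latent neighbor distribution for every node.
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-9) + 1e-9)
    A = F.softmax((scores + gumbel) / tau, dim=1)
    return A @ V                               # aggregate messages
```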
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training; a minimal sketch of this recipe follows below.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
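A minimal sketch of the edge-masking recipe, assuming a generic node encoder and a dot-product edge decoder (both placeholders, not the paper's design):

```python
# Minimal sketch of edge masking and reconstruction (illustrative only).
import torch
import torch.nn.functional as F

def mask_edges(edge_index, mask_ratio=0.7):
    """edge_index: (2, n_edges). Hide a large fraction of edges at random."""
    n = edge_index.size(1)
    perm = torch.randperm(n)
    n_masked = int(mask_ratio * n)
    masked = edge_index[:, perm[:n_masked]]    # targets to reconstruct
    visible = edge_index[:, perm[n_masked:]]   # what the encoder sees
    return visible, masked

def edge_reconstruction_loss(z, masked_edges):
    """Score each hidden edge by the dot product of its endpoint embeddings."""
    src, dst = masked_edges
    logits = (z[src] * z[dst]).sum(dim=1)
    # Positive term only; a full implementation would also sample and
    # score negative (non-edge) pairs.
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```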
- Inferential SIR-GN: Scalable Graph Representation Learning [0.4699313647907615]
Graph representation learning methods generate numerical vector representations for the nodes in a network.
In this work, we propose Inferential SIR-GN, a model that is pre-trained on random graphs and then computes node representations rapidly.
We demonstrate that the model captures a node's structural role information, and we show excellent performance on node and graph classification tasks on unseen networks; an illustrative sketch of structural-role features follows below.
arXiv Detail & Related papers (2021-11-08T20:56:37Z)
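The summary above is light on mechanics, so the following is only a generic stand-in, not the paper's algorithm: a classic way to capture a node's structural role, independently of its identity, is to describe it by its own degree together with statistics of its neighbors' degrees, features that transfer to unseen graphs because they depend only on local structure.

```python
# Generic stand-in for structural-role features (NOT the paper's algorithm):
# identity-free descriptors built from degrees, which transfer across graphs.
import torch

def structural_role_features(adj):
    """adj: (n, n) dense 0/1 adjacency matrix."""
    deg = adj.sum(dim=1)                                # own degree
    neigh_deg_sum = adj @ deg                           # sum of neighbor degrees
    neigh_deg_mean = neigh_deg_sum / deg.clamp(min=1)   # mean neighbor degree
    return torch.stack([deg, neigh_deg_mean, neigh_deg_sum], dim=1)
```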
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that, through careful design of the models and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks; a generic sketch of one common binarization ingredient follows below.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
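One ingredient common to most binarization strategies, sketched generically below, is the straight-through estimator: weights are binarized with sign() in the forward pass while gradients flow to latent real-valued weights. This is an assumption-level illustration, not the specific strategies evaluated in the paper.

```python
# Generic sketch of weight binarization with a straight-through estimator
# (a common ingredient of binarized networks; not the paper's exact strategy).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                 # {-1, +1} weights in the forward

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through: route gradients to the latent real weights,
        # zeroed where |w| > 1 and sign() is saturated.
        return grad_out * (w.abs() <= 1).float()

class BinaryLinear(torch.nn.Linear):
    """Drop-in linear layer whose weights are binarized on the fly."""
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return torch.nn.functional.linear(x, w_bin, self.bias)
```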
- Bi-GCN: Binary Graph Convolutional Network [57.733849700089955]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and the input node features.
Our Bi-GCN can reduce the memory consumption by an average of 30x for both the network parameters and input data, and accelerate the inference speed by an average of 47x.
arXiv Detail & Related papers (2020-10-15T07:26:23Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, named BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes; a toy sketch of hash-code learning follows below.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
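As a generic illustration of jointly handling continuous embeddings and discrete codes (not the paper's framework), one can binarize embeddings with sign() plus a straight-through gradient and retrieve candidates by Hamming distance:

```python
# Generic sketch of hash-code learning and Hamming-distance candidate search
# (illustrative only; not the paper's framework).
import torch

def hash_codes(z):
    """Binarize continuous embeddings into {-1, +1} codes; the detach trick
    lets gradients pass through sign() as if it were the identity."""
    b = torch.sign(z)
    return z + (b - z).detach()

def hamming_topk(query_code, item_codes, k=10):
    """Candidate search in code space. For {-1, +1} codes of dimension d,
    the dot product equals d - 2 * Hamming distance, so ranking by dot
    product is ranking by Hamming distance."""
    sims = item_codes @ query_code
    return sims.topk(k).indices
```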