Graph Convolutional Neural Networks with Diverse Negative Samples via
Decomposed Determinant Point Processes
- URL: http://arxiv.org/abs/2212.02055v3
- Date: Wed, 6 Sep 2023 06:46:13 GMT
- Title: Graph Convolutional Neural Networks with Diverse Negative Samples via
Decomposed Determinant Point Processes
- Authors: Wei Duan, Junyu Xuan, Maoying Qiao, Jie Lu
- Abstract summary: Graph convolutional networks (GCNs) have achieved great success in graph representation learning.
In this paper, we use quality-diversity decomposition in determinant point processes to obtain diverse negative samples.
We propose a new shortest-path-based method to improve computational efficiency.
- Score: 21.792376993468064
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph convolutional networks (GCNs) have achieved great success in graph
representation learning by extracting high-level features from nodes and their
topology. Since GCNs generally follow a message-passing mechanism, each node
aggregates information from its first-order neighbours to update its
representation. As a result, the representations of nodes with edges between
them should be positively correlated and thus can be considered positive
samples. However, there are more non-neighbour nodes in the whole graph, which
provide diverse and useful information for the representation update. Two
non-adjacent nodes usually have different representations, which can be seen as
negative samples. Besides the node representations, the structural information
of the graph is also crucial for learning. In this paper, we use
quality-diversity decomposition in determinant point processes (DPP) to obtain
diverse negative samples. When defining a distribution on diverse subsets of
all non-neighbouring nodes, we incorporate both graph structure information and
node representations. Since the DPP sampling process requires matrix eigenvalue
decomposition, we propose a new shortest-path-based method to improve
computational efficiency. Finally, we incorporate the obtained negative samples
into the graph convolution operation. The ideas are evaluated empirically in
experiments on node classification tasks. These experiments show that the newly
proposed methods not only improve the overall performance of standard
representation learning but also significantly alleviate over-smoothing
problems.
Related papers
- SF-GNN: Self Filter for Message Lossless Propagation in Deep Graph Neural Network [38.669815079957566]
Graph Neural Network (GNN) with the main idea of encoding graph structure information of graphs by propagation and aggregation has developed rapidly.
It achieved excellent performance in representation learning of multiple types of graphs such as homogeneous graphs, heterogeneous graphs, and more complex graphs like knowledge graphs.
For the phenomenon of performance degradation in deep GNNs, we propose a new perspective.
arXiv Detail & Related papers (2024-07-03T02:40:39Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
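The Gumbel-Softmax trick underlying that operator can be sketched generically as follows. This is an illustration of the basic reparameterisation only, not NodeFormer's kernelized variant: Gumbel noise added to logits followed by a temperature-controlled softmax gives a differentiable approximation to categorical sampling.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable approximate categorical sampling: perturb logits
    with Gumbel(0, 1) noise, then apply a softmax with temperature tau.
    Lower tau pushes the output toward a one-hot vector."""
    if rng is None:
        rng = np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1)
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)
```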
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Learning from the Dark: Boosting Graph Convolutional Neural Networks with Diverse Negative Samples [19.588559820438718]
Graphs have a large, dark, all-but-forgotten world in which we find the non-neighbouring nodes (negative samples).
We show that this great dark world holds a substantial amount of information that might be useful for representation learning.
Our overall idea is to select appropriate negative samples for each node and incorporate the negative information contained in these samples into the representation updates.
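One plausible way negative information could enter the representation update is a signed aggregation: attract the node toward the mean of its neighbours and repel it from the mean of its sampled negatives. The sketch below is hypothetical, not the paper's exact operator; the repulsion weight `omega` and the mean aggregators are assumptions.

```python
import numpy as np

def signed_update(h, neighbours, negatives, omega=0.5):
    """Update one node's representation from a matrix h of all node
    representations: pull toward the neighbour mean, push away from
    the mean of the sampled negative nodes (weighted by omega)."""
    pos = np.mean([h[j] for j in neighbours], axis=0)
    neg = np.mean([h[k] for k in negatives], axis=0)
    return pos - omega * neg
```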
arXiv Detail & Related papers (2022-10-03T06:14:21Z)
- Node Representation Learning in Graph via Node-to-Neighbourhood Mutual Information Maximization [27.701736055800314]
Key towards learning informative node representations in graphs lies in how to gain contextual information from the neighbourhood.
We present a self-supervised node representation learning strategy via directly maximizing the mutual information between the hidden representations of nodes and their neighbourhood.
Our framework is optimized via a surrogate contrastive loss, where the positive selection underpins the quality and efficiency of representation learning.
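A surrogate contrastive loss of this kind is commonly instantiated as InfoNCE. The sketch below is a generic single-node illustration, not the paper's implementation; the cosine-similarity scoring and temperature `tau` are assumptions. It pulls a node's representation toward its neighbourhood summary (the positive) and pushes it away from the negatives.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss for one node: low when the anchor
    aligns with the positive and is dissimilar to every negative."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

An anchor aligned with its positive incurs a smaller loss than one aligned with a negative, which is the gradient signal driving the representation update.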
arXiv Detail & Related papers (2022-03-23T08:21:10Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- CatGCN: Graph Convolutional Networks with Categorical Node Features [99.555850712725]
CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
arXiv Detail & Related papers (2020-09-11T09:25:17Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
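Selecting unlabelled examples "sufficiently different" from the labelled ones is often done with a greedy max-min (core-set style) rule over embeddings. The sketch below is an assumption-laden illustration of that idea, not the paper's exact selection criterion.

```python
import numpy as np

def select_diverse_unlabelled(emb, labelled, k):
    """Greedily pick k unlabelled indices whose embeddings are farthest
    (by min distance) from the labelled set plus earlier picks."""
    labelled = list(labelled)
    unlabelled = [i for i in range(len(emb)) if i not in labelled]
    picked = []
    for _ in range(k):
        ref = np.array([emb[i] for i in labelled + picked])
        best, best_d = None, -1.0
        for i in unlabelled:
            if i in picked:
                continue
            d = np.min(np.linalg.norm(ref - emb[i], axis=1))
            if d > best_d:
                best, best_d = i, d
        if best is None:
            break
        picked.append(best)
    return picked
```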
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.