GPS: A Policy-driven Sampling Approach for Graph Representation Learning
- URL: http://arxiv.org/abs/2112.14482v1
- Date: Wed, 29 Dec 2021 09:59:53 GMT
- Title: GPS: A Policy-driven Sampling Approach for Graph Representation Learning
- Authors: Tiehua Zhang, Yuze Liu, Xin Chen, Xiaowei Huang, Feng Zhu, Xi Zheng
- Abstract summary: We propose an adaptive Graph Policy-driven Sampling model (GPS), in which the influence of each node in the local neighborhood is captured through an adaptive correlation calculation.
Our proposed model outperforms existing ones by 3%-8% on several key benchmarks, achieving state-of-the-art performance on real-world datasets.
- Score: 12.760239169374984
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph representation learning has drawn increasing attention in recent years,
especially for learning low-dimensional embeddings at both the node and graph
level for classification and recommendation tasks. To enable representation
learning on large-scale real-world graph data, much research has focused on
developing different sampling strategies to facilitate the training process.
Herein, we propose an adaptive Graph Policy-driven Sampling model (GPS), in
which the influence of each node in the local neighborhood is captured through
an adaptive correlation calculation. Specifically, neighbor selection is guided
by an adaptive policy algorithm and contributes directly to the message
aggregation, node-embedding update, and graph-level readout steps. We then
conduct comprehensive experiments against baseline methods on graph
classification tasks from various perspectives. Our proposed model outperforms
existing ones by 3%-8% on several key benchmarks, achieving state-of-the-art
performance on real-world datasets.
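The abstract only outlines the pipeline (policy-guided neighbor selection, message aggregation, embedding update, graph-level readout), so below is a minimal, hypothetical sketch of how such a layer could be wired up in plain PyTorch. The class name `PolicySampledLayer`, the learned correlation score, and the top-k selection rule are illustrative assumptions, not the authors' GPS implementation.

```python
# A toy sketch (assumption, not the paper's code): a learned correlation score
# ranks neighbors, and only the top-k neighbors of each node contribute to the
# aggregated message; a mean readout produces the graph-level embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicySampledLayer(nn.Module):
    def __init__(self, in_dim, out_dim, k=3):
        super().__init__()
        self.query = nn.Linear(in_dim, in_dim)        # adaptive correlation projection
        self.update = nn.Linear(2 * in_dim, out_dim)  # combines self features with the message
        self.k = k

    def forward(self, x, adj):
        # x:   [N, in_dim] node features
        # adj: [N, N] dense 0/1 adjacency with self-loops (toy setting; real graphs are sparse)
        n = x.size(0)
        scores = self.query(x) @ x.t()                          # [N, N] pairwise correlation
        scores = scores.masked_fill(adj == 0, float("-inf"))    # restrict to actual neighbors
        top = scores.topk(min(self.k, n), dim=1)                # policy: keep the k best neighbors
        weights = torch.softmax(top.values, dim=1)              # [N, k] aggregation weights
        neighbors = x[top.indices]                              # [N, k, in_dim]
        message = (weights.unsqueeze(-1) * neighbors).sum(dim=1)
        return F.relu(self.update(torch.cat([x, message], dim=1)))


def readout(h):
    # graph-level readout: mean over node embeddings
    return h.mean(dim=0)


# Toy usage: a 6-node ring graph with self-loops.
x = torch.randn(6, 16)
adj = torch.eye(6)
for i in range(6):
    adj[i, (i + 1) % 6] = adj[i, (i - 1) % 6] = 1.0
graph_embedding = readout(PolicySampledLayer(16, 32)(x, adj))
```

The abstract does not specify how the selection policy is trained, so this sketch keeps it as a plain differentiable ranking simply to show where the policy sits relative to aggregation, update, and readout.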
Related papers
- Graph Learning under Distribution Shifts: A Comprehensive Survey on
Domain Adaptation, Out-of-distribution, and Continual Learning [53.81365215811222]
We provide a review and summary of the latest approaches, strategies, and insights that address distribution shifts within the context of graph learning.
We categorize existing graph learning methods into several essential scenarios, including graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning.
We discuss the potential applications and future directions for graph learning under distribution shifts with a systematic analysis of the current state in this field.
arXiv Detail & Related papers (2024-02-26T07:52:40Z) - Universal Graph Continual Learning [22.010954622073598]
We focus on a universal approach wherein each data point in a task can be a node or a graph, and the task varies from node to graph classification.
We propose a novel method that enables graph neural networks to excel in this universal setting.
arXiv Detail & Related papers (2023-08-27T01:19:19Z) - Deep learning for dynamic graphs: models and benchmarks [16.851689741256912]
Recent progress in research on Deep Graph Networks (DGNs) has led to a maturation of the domain of learning on graphs.
Despite the growth of this research field, there are still important challenges that are yet unsolved.
arXiv Detail & Related papers (2023-07-12T12:02:36Z) - Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - A Comprehensive Analytical Survey on Unsupervised and Semi-Supervised
Graph Representation Learning Methods [4.486285347896372]
This survey aims to evaluate all major classes of graph embedding methods.
We organized graph embedding techniques using a taxonomy that includes methods from manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks.
We designed experiments on top of the PyTorch Geometric and DGL libraries and ran them on different multicore CPU and GPU platforms.
arXiv Detail & Related papers (2021-12-20T07:50:26Z) - Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph from multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z) - Dynamic Graph Modeling of Simultaneous EEG and Eye-tracking Data for
Reading Task Identification [79.41619843969347]
We present a new approach, which we call AdaGTCN, for identifying human reader intent from Electroencephalogram (EEG) and Eye movement (EM) data.
Our method, Adaptive Graph Temporal Convolution Network (AdaGTCN), uses an Adaptive Graph Learning Layer and Deep Neighborhood Graph Convolution Layer.
We compare our approach with several baselines, reporting an improvement of 6.29% on the ZuCo 2.0 dataset, along with extensive ablation experiments.
arXiv Detail & Related papers (2021-02-21T18:19:49Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - Quantifying Challenges in the Application of Graph Representation
Learning [0.0]
We provide an application-oriented perspective on a set of popular embedding approaches.
We evaluate their representational power with respect to real-world graph properties.
Our results suggest that "one-to-fit-all" GRL approaches are hard to define in real-world scenarios.
arXiv Detail & Related papers (2020-06-18T03:19:43Z)