Hyper-GST: Predict Metro Passenger Flow Incorporating GraphSAGE,
Hypergraph, Social-meaningful Edge Weights and Temporal Exploitation
- URL: http://arxiv.org/abs/2211.04988v1
- Date: Wed, 9 Nov 2022 16:04:45 GMT
- Title: Hyper-GST: Predict Metro Passenger Flow Incorporating GraphSAGE,
Hypergraph, Social-meaningful Edge Weights and Temporal Exploitation
- Authors: Yuyang Miao, Yao Xu, Danilo Mandic
- Abstract summary: Graph-based deep learning algorithms could utilise the graph structure but raise a few challenges.
This study proposes a model based on GraphSAGE with an edge weights learner applied.
Hypergraph and temporal exploitation modules are also constructed as add-ons for better performance.
- Score: 4.698632626407558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting metro passenger flow precisely is of great importance for dynamic
traffic planning. Deep learning algorithms have been widely applied due to
their robust performance in modelling non-linear systems. However, traditional
deep learning algorithms completely discard the inherent graph structure within
the metro system. Graph-based deep learning algorithms could utilise the graph
structure but raise a few challenges, such as how to determine the weights of
the edges and the shallow receptive field caused by the over-smoothing issue.
To address these challenges, this study proposes a model based on
GraphSAGE with an edge weights learner applied. The edge weights learner
utilises socially meaningful features to generate edge weights. Hypergraph and
temporal exploitation modules are also constructed as add-ons for better
performance. A comparison study is conducted between the proposed algorithm
and other state-of-the-art graph neural networks, in which the proposed
algorithm achieves improved performance.
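To make the core idea concrete, here is a minimal plain-PyTorch sketch, assuming a SAGE-style mean aggregator and a small MLP that maps edge features to positive per-edge weights. It is an illustration, not the authors' implementation: the edge features (e.g. inter-station travel time or transfer counts), the dimensions, and the Softplus activation are all assumptions.
```python
# Minimal sketch (assumptions, not the authors' code): a SAGE-style layer
# whose neighbour aggregation is scaled by edge weights learned from
# socially meaningful edge features.
import torch
import torch.nn as nn

class EdgeWeightLearner(nn.Module):
    """Maps edge features (hypothetical, e.g. travel time, transfer counts)
    to a positive scalar weight per edge."""
    def __init__(self, edge_feat_dim: int, hidden: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(edge_feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep weights positive
        )

    def forward(self, edge_feats):                 # (E, edge_feat_dim)
        return self.mlp(edge_feats).squeeze(-1)    # (E,)

class WeightedSAGELayer(nn.Module):
    """GraphSAGE-style layer with a weighted-mean neighbour aggregator."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_weight):
        # x: (N, in_dim); edge_index: (2, E) src -> dst; edge_weight: (E,)
        src, dst = edge_index
        msg = x[src] * edge_weight.unsqueeze(-1)            # weight messages
        agg = torch.zeros_like(x).index_add_(0, dst, msg)   # weighted sum
        norm = torch.zeros(x.size(0), device=x.device).index_add_(
            0, dst, edge_weight)
        agg = agg / norm.clamp(min=1e-6).unsqueeze(-1)      # weighted mean
        return torch.relu(self.lin_self(x) + self.lin_neigh(agg))

# Usage (shapes illustrative):
#   w = EdgeWeightLearner(4)(edge_feats)             # edge_feats: (E, 4)
#   h = WeightedSAGELayer(8, 16)(x, edge_index, w)   # x: (N, 8)
```
Keeping the weights positive via Softplus ensures the weighted mean stays well defined even before the learner has converged.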
Related papers
- Layer-wise training for self-supervised learning on graphs [0.0]
End-to-end training of graph neural networks (GNN) on large graphs presents several memory and computational challenges.
We propose Layer-wise Regularized Graph Infomax, an algorithm to train GNNs layer by layer in a self-supervised manner.
arXiv Detail & Related papers (2023-09-04T10:23:39Z)
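As a rough illustration of the layer-wise pattern described above (a sketch under assumptions, not the paper's code), each layer below is optimised against its own self-supervised objective while earlier layers stay frozen; the ssl_loss callable is a hypothetical stand-in for an infomax-style loss.
```python
# Layer-wise GNN training sketch: no backward pass ever spans all layers.
import torch

def train_layerwise(layers, ssl_loss, x, edge_index, epochs=50, lr=1e-3):
    """layers: list of nn.Module GNN layers taking (x, edge_index).
    ssl_loss: callable mapping layer outputs to a self-supervised loss."""
    h = x
    for layer in layers:
        opt = torch.optim.Adam(layer.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = ssl_loss(layer(h, edge_index))  # per-layer objective
            loss.backward()
            opt.step()
        with torch.no_grad():          # freeze this layer's output as the
            h = layer(h, edge_index)   # fixed input to the next layer
    return h
```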
- STG4Traffic: A Survey and Benchmark of Spatial-Temporal Graph Neural Networks for Traffic Prediction [9.467593700532401]
This paper provides a systematic review of graph learning strategies and commonly used graph convolution algorithms.
We then conduct a comprehensive analysis of the strengths and weaknesses of recently proposed spatial-temporal graph network models.
We build a study called STG4Traffic using the deep learning framework PyTorch to establish a standardized and scalable benchmark on two types of traffic datasets.
arXiv Detail & Related papers (2023-07-02T06:56:52Z)
- Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs sampled from the graphon, with edges drawn as Bernoulli random variables.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
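A schematic sketch of the growing-graph idea, under assumptions: the graphon is a hypothetical callable W returning edge probabilities in [0, 1], and the GNN and loss are assumed to accept a dense adjacency matrix. This is an illustration, not the authors' procedure.
```python
# Train one GNN on successively larger graphs sampled from a graphon W.
import torch

def sample_graph_from_graphon(W, n):
    """Edges are Bernoulli draws: A_ij ~ Bern(W(u_i, u_j)), latents u ~ U[0,1].
    W must broadcast over (n, 1) x (1, n) and return values in [0, 1]."""
    u = torch.rand(n)
    probs = W(u.unsqueeze(1), u.unsqueeze(0))   # (n, n) edge probabilities
    A = torch.bernoulli(probs).triu(1)          # no self-loops
    return A + A.t()                            # symmetric adjacency

def train_on_growing_graphs(gnn, W, loss_fn, sizes=(100, 200, 400, 800),
                            steps_per_size=100, lr=1e-3):
    opt = torch.optim.Adam(gnn.parameters(), lr=lr)
    for n in sizes:                   # successively larger sampled graphs
        A = sample_graph_from_graphon(W, n)
        x = torch.ones(n, 1)          # placeholder node features
        for _ in range(steps_per_size):
            opt.zero_grad()
            loss = loss_fn(gnn(x, A), A)
            loss.backward()
            opt.step()
    return gnn
```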
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
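For illustration, a hedged sketch of edge dropping as a training-time augmentation. The degree-based drop probabilities below are placeholder assumptions; the paper's topology adaptive criterion is more refined.
```python
# Randomly drop edges each training step, biased by local topology.
import torch

def drop_edges(edge_index, num_nodes, base_p=0.2):
    # edge_index: (2, E) tensor of src/dst node indices
    src, dst = edge_index
    deg = torch.bincount(src, minlength=num_nodes).float()
    # Assumption: drop edges around high-degree nodes more aggressively.
    p = (base_p * deg[src] / deg.max()).clamp(min=0.05, max=0.9)
    keep = torch.rand(edge_index.size(1)) > p
    return edge_index[:, keep]
```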
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
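The projected gradient idea can be sketched roughly as follows, treating the unknown adjacency as a continuous matrix and projecting back to [0, 1] after every step. The attack loss, sparsity weight, and final threshold are all assumptions, and the target model is assumed to accept a dense adjacency.
```python
# Projected gradient sketch for inferring training edges from a target GNN.
import torch

def invert_adjacency(target_model, x, labels, num_nodes, steps=200, lr=0.1):
    A = torch.full((num_nodes, num_nodes), 0.5, requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        A_sym = (A + A.t()) / 2                 # keep the graph undirected
        loss = loss_fn(target_model(x, A_sym), labels)
        loss = loss + 0.01 * A_sym.abs().sum()  # sparsity regulariser
        loss.backward()
        opt.step()
        with torch.no_grad():
            A.clamp_(0.0, 1.0)                  # projection onto [0, 1]
    return (A.detach() > 0.5).float()           # discretise at the end
```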
- Understanding Coarsening for Embedding Large-Scale Graphs [3.6739949215165164]
Proper analysis of graphs with Machine Learning (ML) algorithms has the potential to yield far-reaching insights into many areas of research and industry.
The irregular structure of graph data constitutes an obstacle for running ML tasks on graphs.
We analyze the impact of the coarsening quality on the embedding performance both in terms of speed and accuracy.
arXiv Detail & Related papers (2020-09-10T15:06:33Z)
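A toy sketch of the coarsen-embed-lift pipeline this line of work analyses, assuming a precomputed node-to-cluster assignment and an arbitrary embed_fn; both are hypothetical placeholders.
```python
# Coarsen the graph, embed the small graph, lift embeddings back to nodes.
import torch

def coarsen(A, cluster):
    """A: (N, N) adjacency; cluster: (N,) long tensor of node -> cluster id."""
    k = int(cluster.max()) + 1
    P = torch.zeros(A.size(0), k)
    P[torch.arange(A.size(0)), cluster] = 1.0   # assignment matrix
    return P.t() @ A @ P                        # coarse adjacency (k, k)

def embed_with_coarsening(A, cluster, embed_fn):
    A_coarse = coarsen(A, cluster)
    Z_coarse = embed_fn(A_coarse)   # any embedding method on the small graph
    return Z_coarse[cluster]        # lift: each node inherits its cluster's row
```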
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
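A condensed sketch of the iterative structure-and-embedding loop, using a cosine-similarity k-nearest-neighbour graph as an assumed stand-in for IDGL's learned graph; the real method is end-to-end and jointly trained.
```python
# Alternate between inferring a graph from embeddings and refining
# embeddings with a GNN on that inferred graph.
import torch
import torch.nn.functional as F

def iterative_graph_learning(x, gnn, iters=3, k=10):
    h = x
    for _ in range(iters):
        z = F.normalize(h, dim=1)
        S = z @ z.t()                            # cosine similarity graph
        topk = S.topk(k, dim=1).indices          # sparsify: keep k neighbours
        A = torch.zeros_like(S).scatter_(1, topk, 1.0)
        A = ((A + A.t()) > 0).float()            # symmetrise
        h = gnn(h, A)                            # refine embeddings
    return h, A
```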
- SIGN: Scalable Inception Graph Neural Networks [4.5158585619109495]
We propose a new, efficient and scalable graph deep learning architecture that sidesteps the need for graph sampling.
Our architecture allows using different local graph operators to best suit the task at hand.
We obtain state-of-the-art results on ogbn-papers100M, the largest public graph dataset, with over 110 million nodes and 1.5 billion edges.
arXiv Detail & Related papers (2020-04-23T14:46:10Z)
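A minimal sketch of the precompute-then-MLP recipe: multi-hop features are computed once, offline, so training involves neither sampling nor message passing. The row normalisation and the number of hops r are assumptions.
```python
# Precompute [X, AX, A^2 X, ...] once; training is then an ordinary MLP fit.
import torch

def sign_features(A, X, r=3):
    deg = A.sum(1).clamp(min=1.0)
    A_norm = A / deg.unsqueeze(1)        # simple row normalisation (assumed)
    feats, cur = [X], X
    for _ in range(r):
        cur = A_norm @ cur               # one more hop, computed offline
        feats.append(cur)
    return torch.cat(feats, dim=1)       # (N, (r+1)*d): input to an MLP
```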
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)