Spatial Graph Coarsening: Weather and Weekday Prediction with London's
Bike-Sharing Service using GNN
- URL: http://arxiv.org/abs/2308.16122v1
- Date: Wed, 30 Aug 2023 16:21:02 GMT
- Title: Spatial Graph Coarsening: Weather and Weekday Prediction with London's
Bike-Sharing Service using GNN
- Authors: Yuta Sato, Pak Hei Lam, Shruti Gupta, Fareesah Hussain
- Abstract summary: This study introduces a Graph Neural Network (GNN) for predicting the weather and weekday of a given day in London.
With node features of land-use characteristics and the number of households around the bike stations, the proposed models outperformed the baseline model in cross-entropy loss and accuracy on the validation dataset.
- Score: 0.40964539027092917
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduced the use of a Graph Neural Network (GNN) for
predicting the weather and weekday of a given day in London, from the dataset of
the Santander Cycles bike-sharing system, as a graph classification task. The proposed GNN
models newly introduced (i) a concatenation operator of graph features with
trained node embeddings and (ii) a graph coarsening operator based on
geographical contiguity, namely "Spatial Graph Coarsening". With the node
features of land-use characteristics and number of households around the bike
stations and graph features of temperatures in the city, our proposed models
outperformed the baseline model in cross-entropy loss and accuracy on the
validation dataset.
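The two operators described above can be illustrated with a minimal sketch. The code below is an assumption-laden simplification, not the authors' implementation: it coarsens a station graph by merging nodes that share a geographic region label (standing in for the paper's geographical-contiguity criterion), averaging their node features, and it concatenates mean-pooled node embeddings with graph-level features (e.g. daily temperature) before a classifier. All function and variable names are hypothetical.

```python
import numpy as np

def spatial_coarsen(adj, feats, regions):
    """Merge nodes that share a region label into a single coarse node.

    adj     : (n, n) 0/1 adjacency matrix of the station graph
    feats   : (n, d) node feature matrix (e.g. land use, households)
    regions : length-n list of region labels (proxy for contiguity)
    Returns the coarse adjacency and the per-region mean features.
    """
    labels = sorted(set(regions))
    idx = {r: i for i, r in enumerate(labels)}
    k = len(labels)

    # Coarse node features: mean of member-node features per region.
    coarse_feats = np.zeros((k, feats.shape[1]))
    counts = np.zeros(k)
    for node, r in enumerate(regions):
        coarse_feats[idx[r]] += feats[node]
        counts[idx[r]] += 1
    coarse_feats /= counts[:, None]

    # Coarse edges: regions are linked if any member nodes were linked.
    coarse_adj = np.zeros((k, k), dtype=int)
    n = len(regions)
    for i in range(n):
        for j in range(n):
            if adj[i, j] and idx[regions[i]] != idx[regions[j]]:
                coarse_adj[idx[regions[i]], idx[regions[j]]] = 1
    return coarse_adj, coarse_feats

def graph_readout(node_emb, graph_feats):
    """Mean-pool node embeddings, then concatenate graph-level features
    (e.g. city temperature) to form the classifier input."""
    pooled = node_emb.mean(axis=0)
    return np.concatenate([pooled, graph_feats])
```

In this sketch, coarsening shrinks the graph before pooling, while the concatenation step lets graph-wide signals such as temperature bypass message passing and reach the classifier directly.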
Related papers
- Revisiting Neighborhood Aggregation in Graph Neural Networks for Node Classification using Statistical Signal Processing [4.184419714263417]
We reevaluate the concept of neighborhood aggregation, a fundamental component of graph neural networks (GNNs).
Our analysis reveals conceptual flaws within certain benchmark GNN models when operating under the assumption of edge-independent node labels.
arXiv Detail & Related papers (2024-07-21T22:37:24Z)
- Tackling Oversmoothing in GNN via Graph Sparsification: A Truss-based Approach [1.4854797901022863]
We propose a novel and flexible truss-based graph sparsification model that prunes edges from dense regions of the graph.
We then utilize our sparsification model in the state-of-the-art baseline GNNs and pooling models, such as GIN, SAGPool, GMT, DiffPool, MinCutPool, HGP-SL, DMonPool, and AdamGNN.
arXiv Detail & Related papers (2024-07-16T17:21:36Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The GTGAN learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z)
- Neighborhood Random Walk Graph Sampling for Regularized Bayesian Graph Convolutional Neural Networks [0.6236890292833384]
In this paper, we propose a novel algorithm called Bayesian Graph Convolutional Network using Neighborhood Random Walk Sampling (BGCN-NRWS).
BGCN-NRWS uses a Markov Chain Monte Carlo (MCMC) based graph sampling algorithm utilizing graph structure, reduces overfitting by using a variational inference layer, and yields consistently competitive classification results compared to the state-of-the-art in semi-supervised node classification.
arXiv Detail & Related papers (2021-12-14T20:58:27Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Customized Graph Neural Networks [38.30640892828196]
Graph Neural Networks (GNNs) have greatly advanced the task of graph classification.
We propose a novel customized graph neural network framework, i.e., Customized-GNN.
The proposed framework is general and can be applied to numerous existing graph neural network models.
arXiv Detail & Related papers (2020-05-22T05:22:24Z)
- Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose the node labeling function to identify the role of each node in the subgraph.
Based on the extracted features, we utilize gated recurrent units (GRUs) to capture temporal information for anomaly detection.
arXiv Detail & Related papers (2020-05-15T09:17:08Z)
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks [45.824642013383944]
Graph neural networks (GNNs) have been shown to effectively represent graph-structured data.
We propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso.
arXiv Detail & Related papers (2020-01-17T09:50:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.