Multi-Graph Fusion Networks for Urban Region Embedding
- URL: http://arxiv.org/abs/2201.09760v1
- Date: Mon, 24 Jan 2022 15:48:50 GMT
- Title: Multi-Graph Fusion Networks for Urban Region Embedding
- Authors: Shangbin Wu, Xu Yan, Xiaoliang Fan, Shirui Pan, Shichao Zhu, Chuanpan
Zheng, Ming Cheng, Cheng Wang
- Abstract summary: Learning embeddings for urban regions from human mobility data can reveal the functionality of regions and enables correlated but distinct tasks such as crime prediction.
We propose multi-graph fusion networks (MGFN) to enable cross-domain prediction tasks.
Experimental results demonstrate that the proposed MGFN outperforms the state-of-the-art methods by up to 12.35%.
- Score: 40.97361959702485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning embeddings for urban regions from human mobility data can
reveal the functionality of regions, and in turn enables correlated but
distinct tasks such as crime prediction. Human mobility data contains rich and
abundant information, which can yield comprehensive region embeddings for
cross-domain tasks. In this paper, we propose multi-graph fusion networks
(MGFN) to enable cross-domain prediction tasks. First, we integrate graphs
with spatio-temporal similarity into mobility patterns through a mobility
graph fusion module. Then, in the mobility pattern joint learning module, we
design a multi-level cross-attention mechanism to learn comprehensive
embeddings from multiple mobility patterns based on intra-pattern and
inter-pattern messages. Finally, we conduct extensive experiments on
real-world urban datasets. Experimental results demonstrate that the proposed
MGFN outperforms the state-of-the-art methods by up to 12.35%.
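The two-stage pipeline described in the abstract, fusing raw mobility graphs into a few mobility patterns and then attending across patterns, can be sketched roughly as follows. The k-means-style grouping, the single propagation step, and the mean-query attention are simplifying assumptions made for illustration; they are not MGFN's actual modules.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_mobility_graphs(graphs, n_patterns, n_iter=10, seed=0):
    """Group per-time-slot mobility graphs into mobility patterns by
    similarity of their flattened adjacency, then average each group
    (a k-means-style stand-in for the mobility graph fusion module)."""
    flat = np.stack([g.ravel() for g in graphs])              # (T, N*N)
    rng = np.random.default_rng(seed)
    centers = flat[rng.choice(len(flat), n_patterns, replace=False)]
    for _ in range(n_iter):
        dist = ((flat[:, None, :] - centers[None]) ** 2).sum(-1)  # (T, K)
        assign = dist.argmin(1)
        for k in range(n_patterns):
            if (assign == k).any():
                centers[k] = flat[assign == k].mean(0)
    n = graphs[0].shape[0]
    return [centers[k].reshape(n, n) for k in range(n_patterns)]

def embed_regions(patterns, feats):
    """Intra-pattern message passing followed by inter-pattern
    attention, yielding one embedding vector per region."""
    intra = []
    for A in patterns:
        P = A / np.maximum(A.sum(1, keepdims=True), 1e-8)  # row-normalize
        intra.append(P @ feats)                  # one propagation step
    H = np.stack(intra, axis=1)                  # (N, K, d)
    q = H.mean(axis=1, keepdims=True)            # mean query per region
    attn = softmax((q * H).sum(-1) / np.sqrt(H.shape[-1]))  # (N, K)
    return (attn[..., None] * H).sum(axis=1)     # (N, d)
```

Each region's final embedding is an attention-weighted mixture of its per-pattern embeddings, which mirrors the intra-pattern/inter-pattern split at a toy scale.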
Related papers
- Fine-Grained Urban Flow Inference with Multi-scale Representation Learning [14.673004628911443]
We propose an effective fine-grained urban flow inference model called UrbanMSR.
It uses self-supervised contrastive learning to obtain dynamic multi-scale representations of neighborhood-level and city-level geographic information.
We validate the performance through extensive experiments on three real-world datasets.
arXiv Detail & Related papers (2024-06-14T04:42:29Z)
- Digital Twin Mobility Profiling: A Spatio-Temporal Graph Learning Approach [9.56255685195115]
Mobility profiling can extract potential patterns in urban traffic from mobility data.
Digital twin (DT) technology paves the way for cost-effective and performance-optimised management.
We propose a digital twin mobility profiling framework to learn node profiles on a spatio-temporal mobility network DT model.
arXiv Detail & Related papers (2024-02-06T06:37:43Z)
- Attentive Graph Enhanced Region Representation Learning [7.4106801792345705]
Representing urban regions accurately and comprehensively is essential for various urban planning and analysis tasks.
We propose the Attentive Graph Enhanced Region Representation Learning (ATGRL) model, which aims to capture comprehensive dependencies from multiple graphs and learn rich semantic representations of urban regions.
arXiv Detail & Related papers (2023-07-06T16:38:43Z)
- Multi-Temporal Relationship Inference in Urban Areas [75.86026742632528]
Finding temporal relationships among locations can benefit various urban applications, such as dynamic offline advertising and smart public transport planning.
We propose a graph learning solution to Trial, which includes a spatially evolving graph neural network (SEENet).
SEConv performs the intra-time aggregation and inter-time propagation to capture the multifaceted spatially evolving contexts from the view of location message passing.
SE-SSL designs time-aware self-supervised learning tasks in a global-local manner with an additional evolving constraint to enhance location representation learning and further handle relationship sparsity.
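The intra-time aggregation / inter-time propagation idea behind SEConv can be illustrated with a toy operator on a sequence of graph snapshots. The mean aggregation and the exponential-smoothing-style propagation below are assumptions made for the sketch, not the paper's actual operator.

```python
import numpy as np

def se_conv_sketch(adjs, feats, alpha=0.5):
    """For each time step: aggregate neighbor features within the
    snapshot (intra-time), then blend with the previous step's hidden
    state (inter-time propagation). Illustrative of the SEConv idea.
    adjs: list of (N, N) adjacencies per time step; feats: (N, d)."""
    h_prev = np.zeros_like(feats)
    outs = []
    for A in adjs:
        deg = np.maximum(A.sum(1, keepdims=True), 1e-8)
        intra = (A @ feats) / deg                  # intra-time aggregation
        h = alpha * intra + (1 - alpha) * h_prev   # inter-time propagation
        outs.append(h)
        h_prev = h
    return outs
```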
arXiv Detail & Related papers (2023-06-15T07:48:32Z)
- Semantic-Fused Multi-Granularity Cross-City Traffic Prediction [17.020546413647708]
We propose a Semantic-Fused Multi-Granularity Transfer Learning model to achieve knowledge transfer across cities with fused semantics at different granularities.
In detail, we design a semantic fusion module to fuse various semantics while conserving static spatial dependencies.
We conduct extensive experiments on six real-world datasets to verify the effectiveness of our STL model.
arXiv Detail & Related papers (2023-02-23T04:26:34Z)
- Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the domain-shift problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z)
- Joint Demand Prediction for Multimodal Systems: A Multi-task Multi-relational Spatiotemporal Graph Neural Network Approach [7.481812882780837]
This study proposes a multi-relational graph neural network (MRGNN) for multimodal demand prediction, which captures cross-mode heterogeneous spatial dependencies.
Experiments are conducted using real-world datasets from New York City.
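The multi-relational aggregation at the core of an MRGNN-style layer can be sketched as a weighted sum of relation-specific neighborhood aggregations. The fixed uniform weights and mean aggregation are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def multi_relational_agg(rel_adjs, feats, weights=None):
    """Aggregate node features over several relation-specific graphs
    (e.g., one per travel mode) and combine the results with one
    weight per relation. Illustrative of the multi-relational idea."""
    k = len(rel_adjs)
    if weights is None:
        weights = np.full(k, 1.0 / k)      # uniform stand-in weights
    out = np.zeros_like(feats)
    for w, A in zip(weights, rel_adjs):
        deg = np.maximum(A.sum(1, keepdims=True), 1e-8)
        out += w * ((A @ feats) / deg)     # relation-specific mean agg
    return out
```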
arXiv Detail & Related papers (2021-12-15T12:35:35Z)
- Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z)
- Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation that leverage adversarial learning to unify source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
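Maximizing a mutual-information estimate between graph inputs and hidden representations, as in the GMI paper, can be sketched with a JSD-style lower bound scored by a bilinear critic over matched versus row-shuffled pairs. The critic form and the shuffled negatives are illustrative assumptions rather than the exact GMI objective.

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)

def mi_jsd_estimate(x, h, seed=0):
    """JSD-based lower-bound estimate of mutual information between
    node inputs x (N, dx) and embeddings h (N, dh): score matched
    (x_i, h_i) pairs high and shuffled pairs low via a bilinear critic.
    Illustrative sketch, not the paper's exact loss."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((x.shape[1], h.shape[1])) * 0.1  # critic
    pos = np.einsum('ij,jk,ik->i', x, W, h)                  # matched
    neg = np.einsum('ij,jk,ik->i',
                    x[rng.permutation(len(x))], W, h)        # shuffled
    return (-softplus(-pos)).mean() - softplus(neg).mean()
```

Training an encoder to maximize this quantity (with the critic learned jointly) pushes embeddings to retain information about their own inputs, which is the unsupervised signal the abstract describes.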
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.