Urban Region Representation Learning with Attentive Fusion
- URL: http://arxiv.org/abs/2312.04606v2
- Date: Fri, 26 Apr 2024 05:56:32 GMT
- Title: Urban Region Representation Learning with Attentive Fusion
- Authors: Fengze Sun, Jianzhong Qi, Yanchuan Chang, Xiaoliang Fan, Shanika Karunasekera, Egemen Tanin
- Abstract summary: We propose a novel model for learning urban region representations, i.e., embeddings.
Our model is powered by a dual-feature attentive fusion module named DAFusion.
Using our learned region embeddings leads to consistent improvements of up to 31% in prediction accuracy.
- Score: 18.095344335507082
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: An increasing number of related urban data sources have brought forth novel opportunities for learning urban region representations, i.e., embeddings. The embeddings describe latent features of urban regions and enable discovering similar regions for urban planning applications. Existing methods learn a separate embedding for a region from each different type of region feature data and subsequently fuse all learned embeddings of the region to generate a unified region embedding. However, these studies often overlook the significance of the fusion process. The typical fusion methods rely on simple aggregation, such as summation and concatenation, thereby disregarding correlations within the fused region embeddings. To address this limitation, we propose a novel model named HAFusion. Our model is powered by a dual-feature attentive fusion module named DAFusion, which fuses embeddings from different region features to learn higher-order correlations between the regions as well as between the different types of region features. DAFusion is generic: it can be integrated into existing models to enhance their fusion process. Further, motivated by the effective fusion capability of an attentive module, we propose a hybrid attentive feature learning module named HALearning to enhance the embedding learning from each individual type of region feature. Extensive experiments on three real-world datasets demonstrate that our model HAFusion outperforms state-of-the-art methods across three different prediction tasks. Using our learned region embeddings leads to consistent improvements of up to 31% in prediction accuracy.
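The core idea of DAFusion described above, attentively fusing the per-feature-view embeddings of each region so that correlations across feature views and across regions are both captured, can be illustrated with a small sketch. The PyTorch code below is an illustrative assumption, not the authors' implementation; the module name AttentiveFusion, the layer choices, and the embedding sizes are hypothetical.

```python
# A minimal, illustrative sketch of attention-based fusion of per-view region
# embeddings (NOT the authors' HAFusion/DAFusion code). Names, layer choices,
# and sizes are assumptions for demonstration only.
import torch
import torch.nn as nn


class AttentiveFusion(nn.Module):
    """Fuses one embedding per feature view into a single region embedding."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Attention across feature views (correlations between feature types).
        self.view_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Attention across regions (correlations between regions).
        self.region_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_regions, num_views, dim] -- one embedding per region per view.
        h, _ = self.view_attn(x, x, x)            # attend over views within each region
        h = self.norm(h + x)
        fused = h.mean(dim=1, keepdim=True)       # pool views -> [num_regions, 1, dim]
        fused = fused.transpose(0, 1)             # [1, num_regions, dim]
        g, _ = self.region_attn(fused, fused, fused)  # attend over regions
        return (g + fused).squeeze(0)             # [num_regions, dim]


# Example: fuse three hypothetical feature views (e.g., POI, mobility, land use)
# for 180 regions into one 144-dimensional embedding per region.
embeds = torch.randn(180, 3, 144)
fusion = AttentiveFusion(dim=144)
region_embeds = fusion(embeds)                    # [180, 144]
```

In this sketch the first attention pass mixes the feature views of each region and the second mixes the pooled embeddings across regions; the actual DAFusion module in HAFusion may differ in both structure and detail.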
Related papers
- Explainable Hierarchical Urban Representation Learning for Commuting Flow Prediction [1.5156879440024378]
Commuting flow prediction is an essential task for municipal operations in the real world.
We develop a heterogeneous graph-based model to generate meaningful region embeddings for predicting different types of inter-level OD flows.
Our proposed model outperforms existing models under a uniform urban structure.
arXiv Detail & Related papers (2024-08-27T03:30:01Z) - Fusion-Mamba for Cross-modality Object Detection [63.56296480951342]
Fusing information across modalities effectively improves object detection performance.
We design a Fusion-Mamba block (FMB) to map cross-modal features into a hidden state space for interaction.
Our proposed approach outperforms state-of-the-art methods, improving mAP by 5.9% on the M3FD dataset and 4.9% on the FLIR-Aligned dataset.
arXiv Detail & Related papers (2024-04-14T05:28:46Z) - Bayesian Diffusion Models for 3D Shape Reconstruction [54.69889488052155]
We present a prediction algorithm that performs effective Bayesian inference by tightly coupling the top-down (prior) information with the bottom-up (data-driven) procedure.
We show the effectiveness of BDM on the 3D shape reconstruction task.
arXiv Detail & Related papers (2024-03-11T17:55:53Z) - Enhanced Urban Region Profiling with Adversarial Contrastive Learning [7.62909500335772]
EUPAC is a novel framework that enhances the robustness of urban region embeddings.
Our model generates region embeddings that preserve intra-region and inter-region dependencies.
Experiments on real-world datasets demonstrate the superiority of our model over state-of-the-art methods.
arXiv Detail & Related papers (2024-02-02T06:06:45Z) - Attentive Graph Enhanced Region Representation Learning [7.4106801792345705]
Representing urban regions accurately and comprehensively is essential for various urban planning and analysis tasks.
We propose the Attentive Graph Enhanced Region Representation Learning (ATGRL) model, which aims to capture comprehensive dependencies from multiple graphs and learn rich semantic representations of urban regions.
arXiv Detail & Related papers (2023-07-06T16:38:43Z) - GIVL: Improving Geographical Inclusivity of Vision-Language Models with
Pre-Training Methods [62.076647211744564]
We propose GIVL, a Geographically Inclusive Vision-and-Language Pre-trained model.
There are two attributes of geo-diverse visual concepts which can help to learn geo-diverse knowledge: 1) concepts under similar categories have unique knowledge and visual characteristics, 2) concepts with similar visual features may fall in completely different categories.
Compared with similar-size models pre-trained with similar scale of data, GIVL achieves state-of-the-art (SOTA) and more balanced performance on geo-diverse V&L tasks.
arXiv Detail & Related papers (2023-01-05T03:43:45Z) - Urban Region Profiling via A Multi-Graph Representation Learning
Framework [0.0]
We propose a multi-graph representative learning framework, called Region2Vec, for urban region profiling.
Experiments on real-world datasets show that Region2Vec can be employed in three applications and outperforms all state-of-the-art baselines.
arXiv Detail & Related papers (2022-02-04T11:05:37Z) - Multi-Graph Fusion Networks for Urban Region Embedding [40.97361959702485]
Learning embeddings for urban regions from human mobility data can reveal the functionality of regions, thereby enabling correlated but distinct tasks such as crime prediction.
We propose multi-graph fusion networks (MGFN) to enable cross-domain prediction tasks.
Experimental results demonstrate that the proposed MGFN outperforms the state-of-the-art methods by up to 12.35%.
arXiv Detail & Related papers (2022-01-24T15:48:50Z) - PRA-Net: Point Relation-Aware Network for 3D Point Cloud Analysis [56.91758845045371]
We propose a novel framework named Point Relation-Aware Network (PRA-Net).
It is composed of an Intra-region Structure Learning (ISL) module and an Inter-region Relation Learning (IRL) module.
Experiments on several 3D benchmarks covering shape classification, keypoint estimation, and part segmentation have verified the effectiveness and the ability of PRA-Net.
arXiv Detail & Related papers (2021-12-09T13:24:43Z) - Light Field Saliency Detection with Dual Local Graph Learning
and Reciprocative Guidance [148.9832328803202]
We model the information fusion within the focal stack via graph networks.
We build a novel dual graph model to guide the focal stack fusion process using all-focus patterns.
arXiv Detail & Related papers (2021-10-02T00:54:39Z) - Dataset Cartography: Mapping and Diagnosing Datasets with Training
Dynamics [118.75207687144817]
We introduce Data Maps, a model-based tool to characterize and diagnose datasets.
We leverage a largely ignored source of information: the behavior of the model on individual instances during training.
Our results indicate that a shift in focus from quantity to quality of data could lead to robust models and improved out-of-distribution generalization.
arXiv Detail & Related papers (2020-09-22T20:19:41Z)