SMA-Hyper: Spatiotemporal Multi-View Fusion Hypergraph Learning for Traffic Accident Prediction
- URL: http://arxiv.org/abs/2407.17642v1
- Date: Wed, 24 Jul 2024 21:10:34 GMT
- Title: SMA-Hyper: Spatiotemporal Multi-View Fusion Hypergraph Learning for Traffic Accident Prediction
- Authors: Xiaowei Gao, James Haworth, Ilya Ilyankou, Xianghui Zhang, Tao Cheng, Stephen Law, Huanfa Chen
- Abstract summary: Current data-driven models often struggle with data sparsity and the integration of diverse urban data sources.
We introduce a deep dynamic learning framework designed for traffic accident prediction.
It incorporates dual adaptive graph learning mechanisms that enable high-order cross-regional learning.
It also employs an advanced attention mechanism to fuse multiple views of accident data and urban functional features.
- Score: 2.807532512532818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting traffic accidents is key to sustainable city management, which requires effectively addressing the dynamic and complex spatiotemporal characteristics of cities. Current data-driven models often struggle with data sparsity and typically overlook the integration of diverse urban data sources and the high-order dependencies within them. Additionally, they frequently rely on predefined topologies or weights, limiting their adaptability in spatiotemporal predictions. To address these issues, we introduce the Spatiotemporal Multiview Adaptive HyperGraph Learning (SMA-Hyper) model, a dynamic deep learning framework designed for traffic accident prediction. Building on previous research, this innovative model incorporates dual adaptive spatiotemporal graph learning mechanisms that enable high-order cross-regional learning through hypergraphs and dynamic adaptation to evolving urban data. It also utilises contrastive learning to enhance global and local data representations in sparse datasets and employs an advanced attention mechanism to fuse multiple views of accident data and urban functional features, thereby enriching the contextual understanding of risk factors. Extensive testing on the London traffic accident dataset demonstrates that the SMA-Hyper model significantly outperforms baseline models across various temporal horizons and multistep outputs, affirming the effectiveness of its multiview fusion and adaptive learning strategies. The interpretability of the results further underscores its potential to improve urban traffic management and safety by leveraging complex spatiotemporal urban data, offering a scalable framework adaptable to diverse urban environments.
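The abstract outlines three ingredients: adaptive hypergraph learning for high-order cross-regional dependencies, contrastive learning for sparse data, and attention-based fusion of multiple data views. The paper's implementation is not reproduced here, so the following is a minimal sketch, assuming a learnable soft incidence matrix and a simple per-region attention over views; all module names, shapes, and design choices are illustrative, not the authors' code.

```python
# Minimal sketch (not the authors' code): adaptive hypergraph convolution
# followed by attention-based fusion of multiple regional views.
# All layer names, shapes, and the learnable incidence matrix are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveHypergraphConv(nn.Module):
    """Hypergraph convolution with a learnable (soft) incidence matrix."""

    def __init__(self, num_regions: int, num_hyperedges: int, dim: int):
        super().__init__()
        # Soft incidence matrix: how strongly each region belongs to each hyperedge.
        self.incidence_logits = nn.Parameter(torch.randn(num_regions, num_hyperedges))
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_regions, dim)
        H = torch.softmax(self.incidence_logits, dim=-1)       # (regions, hyperedges)
        edge_feat = torch.einsum("re,brd->bed", H, x)           # aggregate regions -> hyperedges
        node_feat = torch.einsum("re,bed->brd", H, edge_feat)   # scatter hyperedges -> regions
        return F.relu(self.lin(node_feat) + x)                  # residual update


class AttentionViewFusion(nn.Module):
    """Fuse per-view region embeddings with learned attention weights."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, num_regions, dim)
        attn = torch.softmax(self.score(views), dim=1)          # weight each view per region
        return (attn * views).sum(dim=1)                        # (batch, num_regions, dim)


if __name__ == "__main__":
    batch, num_views, regions, dim = 2, 3, 32, 16
    # e.g. accident, point-of-interest, and road-network views of the same regions
    x = torch.randn(batch, num_views, regions, dim)
    conv = AdaptiveHypergraphConv(num_regions=regions, num_hyperedges=8, dim=dim)
    per_view = torch.stack([conv(x[:, v]) for v in range(num_views)], dim=1)
    fused = AttentionViewFusion(dim)(per_view)
    print(fused.shape)  # torch.Size([2, 32, 16])
```

The softmax over the incidence logits keeps the region-to-hyperedge assignment adaptive rather than predefined, which is one simple way to avoid the fixed topologies the abstract criticises; the actual SMA-Hyper mechanisms may differ.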
Related papers
- Physics-guided Active Sample Reweighting for Urban Flow Prediction [75.24539704456791]
Urban flow prediction is a spatio-temporal modeling task that estimates the throughput of transportation services such as buses, taxis, and ride-sharing.
Some recent prediction solutions bring remedies with the notion of physics-guided machine learning (PGML).
We develop a physics-guided network (PN) and propose a data-aware framework, Physics-guided Active Sample Reweighting (P-GASR).
arXiv Detail & Related papers (2024-07-18T15:44:23Z) - UrbanGPT: Spatio-Temporal Large Language Models [34.79169613947957]
We present UrbanGPT, which seamlessly integrates a spatio-temporal encoder with the instruction-tuning paradigm.
We conduct extensive experiments on various public datasets, covering different spatio-temporal prediction tasks.
The results demonstrate that UrbanGPT, with its carefully designed architecture, consistently outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-25T12:37:29Z) - Digital Twin Mobility Profiling: A Spatio-Temporal Graph Learning
Approach [9.56255685195115]
Mobility profiling can extract potential patterns in urban traffic from mobility data.
Digital twin (DT) technology paves the way for cost-effective and performance-optimised management.
We propose a digital twin mobility profiling framework to learn node profiles on a mobility spatio-temporal network DT model.
arXiv Detail & Related papers (2024-02-06T06:37:43Z) - Hybrid Transformer and Spatial-Temporal Self-Supervised Learning for
Long-term Traffic Prediction [1.8531577178922987]
We propose a model that combines hybrid Transformer and self-supervised learning.
The model enhances its adaptive data augmentation by applying augmentation techniques at the sequence level of the traffic data.
We design two self-supervised learning tasks to model the temporal and spatial dependencies, thereby improving the accuracy and generalization ability of the model.
arXiv Detail & Related papers (2024-01-29T06:17:23Z) - Rethinking Urban Mobility Prediction: A Super-Multivariate Time Series
Forecasting Approach [71.67506068703314]
Long-term urban mobility predictions play a crucial role in the effective management of urban facilities and services.
Traditionally, urban mobility data has been structured as videos, treating longitude and latitude as fundamental pixels.
In our research, we introduce a fresh perspective on urban mobility prediction.
Instead of oversimplifying urban mobility data as traditional video data, we regard it as a complex time series.
arXiv Detail & Related papers (2023-12-04T07:39:05Z) - Spatio-Temporal Meta Contrastive Learning [18.289397543341707]
We propose a new spatio-temporal contrastive learning framework to encode robust and generalizable spatio-temporal graph representations.
We show that our framework significantly improves performance over various state-of-the-art baselines in traffic crime prediction.
arXiv Detail & Related papers (2023-10-26T04:56:31Z) - Unified Data Management and Comprehensive Performance Evaluation for
Urban Spatial-Temporal Prediction [Experiment, Analysis & Benchmark] [78.05103666987655]
This work addresses challenges in accessing and utilizing diverse urban spatial-temporal datasets.
We introduce atomic files, a unified storage format designed for urban spatial-temporal big data, and validate its effectiveness on 40 diverse datasets.
We conduct extensive experiments using diverse models and datasets, establishing a performance leaderboard and identifying promising research directions.
arXiv Detail & Related papers (2023-08-24T16:20:00Z) - Predictive Experience Replay for Continual Visual Control and
Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z) - Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge
Transfer [58.6106391721944]
Cross-city knowledge has shown its promise, where the model learned from data-sufficient cities is leveraged to benefit the learning process of data-scarce cities.
We propose a model-agnostic few-shot learning framework for spatio-temporal graphs called ST-GFSL.
We conduct comprehensive experiments on four traffic speed prediction benchmarks and the results demonstrate the effectiveness of ST-GFSL compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-05-27T12:46:52Z) - Spatial-Temporal Sequential Hypergraph Network for Crime Prediction [56.41899180029119]
We propose Spatial-Temporal Sequential Hypergraph Network (ST-SHN) to collectively encode complex crime spatial-temporal patterns.
In particular, to handle spatial-temporal dynamics under the long-range and global context, we design a graph-structured message passing architecture.
We conduct extensive experiments on two real-world datasets, showing that our proposed ST-SHN framework can significantly improve the prediction performance.
arXiv Detail & Related papers (2022-01-07T12:46:50Z)
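The ST-SHN entry above mentions a graph-structured message passing architecture for capturing long-range spatial-temporal context. The summary does not give the exact design, so the snippet below is a minimal, generic message-passing step with a mean-normalised adjacency matrix; the layer names, normalisation, and dimensions are assumptions for illustration only, not the ST-SHN implementation.

```python
# Minimal sketch (not the ST-SHN implementation): one step of graph-structured
# message passing over region embeddings, with a mean-normalised adjacency matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MessagePassingLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # transform neighbour features into messages
        self.upd = nn.Linear(2 * dim, dim)   # combine self state with aggregated messages

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_regions, dim); adj: (num_regions, num_regions) with 0/1 links
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        agg = (adj / deg) @ self.msg(x)      # mean-aggregate messages from neighbours
        return F.relu(self.upd(torch.cat([x, agg], dim=-1)))


if __name__ == "__main__":
    regions, dim = 10, 8
    x = torch.randn(regions, dim)
    adj = (torch.rand(regions, regions) > 0.7).float()  # illustrative random region graph
    out = MessagePassingLayer(dim)(x, adj)
    print(out.shape)  # torch.Size([10, 8])
```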