Event-Aware Multimodal Mobility Nowcasting
- URL: http://arxiv.org/abs/2112.08443v1
- Date: Tue, 14 Dec 2021 12:35:20 GMT
- Title: Event-Aware Multimodal Mobility Nowcasting
- Authors: Zhaonan Wang, Renhe Jiang, Hao Xue, Flora D. Salim, Xuan Song, Ryosuke
Shibasaki
- Abstract summary: The event-aware spatio-temporal network EAST-Net is evaluated on real-world datasets with a wide variety and coverage of societal events.
Results verify the superiority of our approach compared with the state-of-the-art baselines.
- Score: 11.540605108140538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a decisive part in the success of Mobility-as-a-Service (MaaS),
spatio-temporal predictive modeling for crowd movements is a challenging task
particularly considering scenarios where societal events drive mobility
behavior to deviate from normality. While tremendous progress has been made
to model high-level spatio-temporal regularities with deep learning, most, if
not all of the existing methods are neither aware of the dynamic interactions
among multiple transport modes nor adaptive to unprecedented volatility brought
by potential societal events. In this paper, we are therefore motivated to
improve the canonical spatio-temporal network (ST-Net) from two perspectives:
(1) design a heterogeneous mobility information network (HMIN) to explicitly
represent intermodality in multimodal mobility; (2) propose a memory-augmented
dynamic filter generator (MDFG) to generate sequence-specific parameters in an
on-the-fly fashion for various scenarios. The enhanced event-aware
spatio-temporal network, namely EAST-Net, is evaluated on several real-world
datasets with a wide variety and coverage of societal events. Both quantitative
and qualitative experimental results verify the superiority of our approach
compared with the state-of-the-art baselines. Code and data are published on
https://github.com/underdoc-wang/EAST-Net.
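The abstract's memory-augmented dynamic filter generator (MDFG) produces sequence-specific parameters on the fly. A minimal numpy sketch of one plausible reading of that mechanism: a pooled sequence summary queries a learned memory of filter prototypes, and the attention-weighted mixture becomes that sequence's filter. All names and shapes here are illustrative assumptions, not the paper's actual implementation (see the published code for that).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MemoryDynamicFilterGenerator:
    """Hypothetical sketch of a memory-augmented dynamic filter generator:
    a memory bank stores filter prototypes; a sequence summary attends over
    the memory keys, and the weighted mix of prototypes is returned as that
    sequence's filter parameters."""

    def __init__(self, d_model, n_slots, filter_size, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = rng.standard_normal((n_slots, d_model))       # memory keys
        self.values = rng.standard_normal((n_slots, filter_size)) # filter prototypes

    def __call__(self, seq_summary):
        # seq_summary: (d_model,) pooled representation of the input sequence
        attn = softmax(self.keys @ seq_summary)  # (n_slots,) mixing weights
        return attn @ self.values                # sequence-specific filter
```

Because the filter is regenerated per input sequence, unusual event-driven sequences can yield filters far from those used for routine traffic, which is the adaptivity the abstract motivates.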
Related papers
- DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States [6.856351850183536]
We introduce DeMo, a framework that decouples multi-modal trajectory queries into two types: directional intentions and dynamic states.
By leveraging this format, we separately optimize the multi-modality and dynamic evolutionary properties of trajectories.
We additionally introduce combined Attention and Mamba techniques for global information aggregation and state sequence modeling.
arXiv Detail & Related papers (2024-10-08T12:27:49Z)
- A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z) - Unsupervised Representation Learning of Complex Time Series for Maneuverability State Identification in Smart Mobility [0.0]
In smart mobility, multivariate time series (MTS) data plays a crucial role in providing the temporal dynamics of behaviors such as maneuver patterns.
In this work, we aim to address challenges associated with modeling MTS data collected from a vehicle using sensors.
Our goal is to investigate the effectiveness of two distinct unsupervised representation learning approaches in identifying maneuvering states in smart mobility.
arXiv Detail & Related papers (2024-08-26T15:16:18Z) - Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning [11.19088022423885]
We propose a novel multi-modality spatio-temporal (MoST) learning framework via Self-Supervised Learning, namely MoSSL.
Results on two real-world MoST datasets verify the superiority of our approach compared with the state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-06T08:24:06Z) - Rethinking Urban Mobility Prediction: A Super-Multivariate Time Series
Forecasting Approach [71.67506068703314]
Long-term urban mobility predictions play a crucial role in the effective management of urban facilities and services.
Traditionally, urban mobility data has been structured as videos, treating longitude and latitude as fundamental pixels.
In our research, we introduce a fresh perspective on urban mobility prediction.
Instead of oversimplifying urban mobility data as traditional video data, we regard it as a complex time series.
arXiv Detail & Related papers (2023-12-04T07:39:05Z) - Persistent-Transient Duality: A Multi-mechanism Approach for Modeling
Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In Human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale consistent plan for the whole activity and (2) the small-scale children interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z) - Dynamic Scenario Representation Learning for Motion Forecasting with
Heterogeneous Graph Convolutional Recurrent Networks [25.383615554172778]
We resort to dynamic heterogeneous graphs to model the evolving scenario.
We design a novel heterogeneous graph convolutional recurrent network, aggregating diverse interaction information.
With a motion forecasting decoder, our model predicts realistic and multi-modal future trajectories of agents.
arXiv Detail & Related papers (2023-03-08T04:10:04Z) - Safety-compliant Generative Adversarial Networks for Human Trajectory
Forecasting [95.82600221180415]
Human trajectory forecasting in crowds presents the challenges of modelling social interactions and outputting collision-free multimodal distributions.
We introduce SGANv2, an improved safety-compliant SGAN architecture equipped with motion-temporal interaction modelling and a transformer-based discriminator design.
arXiv Detail & Related papers (2022-09-25T15:18:56Z) - Continuous-Time and Multi-Level Graph Representation Learning for
Origin-Destination Demand Prediction [52.0977259978343]
This paper proposes a Continuous-time and Multi-level dynamic graph representation learning method for Origin-Destination demand prediction (CMOD).
The state vectors keep historical transaction information and are continuously updated according to the most recently happened transactions.
Experiments are conducted on two real-world datasets from Beijing Subway and New York Taxi, and the results demonstrate the superiority of our model against the state-of-the-art approaches.
arXiv Detail & Related papers (2022-06-30T03:37:50Z) - SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.