Spatio-Temporal Contrastive Self-Supervised Learning for POI-level Crowd
Flow Inference
- URL: http://arxiv.org/abs/2309.03239v2
- Date: Tue, 12 Sep 2023 10:19:45 GMT
- Title: Spatio-Temporal Contrastive Self-Supervised Learning for POI-level Crowd
Flow Inference
- Authors: Songyu Ke, Ting Li, Li Song, Yanping Sun, Qintian Sun, Junbo Zhang, Yu
Zheng
- Abstract summary: We present a novel Contrastive Self-learning framework for Spatio-Temporal data (CSST).
Our approach initiates with the construction of a spatial adjacency graph founded on the Points of Interest (POIs) and their respective distances.
We adopt a swapped prediction approach to anticipate the representation of the target subgraph from similar instances.
Our experiments, conducted on two real-world datasets, demonstrate that the CSST pre-trained on extensive noisy data consistently outperforms models trained from scratch.
- Score: 23.8192952068949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate acquisition of crowd flow at Points of Interest (POIs) is pivotal
for effective traffic management, public service, and urban planning. Despite
this importance, due to the limitations of urban sensing techniques, the data
quality from most sources is inadequate for monitoring crowd flow at each POI.
This renders the inference of accurate crowd flow from low-quality data a
critical and challenging task. The complexity is heightened by three key
factors: 1) The scarcity and rarity of labeled data, 2) The intricate
spatio-temporal dependencies among POIs, and 3) The myriad correlations between
precise crowd flow and GPS reports.
To address these challenges, we recast the crowd flow inference problem as a
self-supervised attributed graph representation learning task and introduce a
novel Contrastive Self-learning framework for Spatio-Temporal data (CSST). Our
approach initiates with the construction of a spatial adjacency graph founded
on the POIs and their respective distances. We then employ a contrastive
learning technique to exploit large volumes of unlabeled spatio-temporal data.
We adopt a swapped prediction approach to anticipate the representation of the
target subgraph from similar instances. Following the pre-training phase, the
model is fine-tuned with accurate crowd flow data. Our experiments, conducted
on two real-world datasets, demonstrate that the CSST pre-trained on extensive
noisy data consistently outperforms models trained from scratch.
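The abstract describes two concrete steps: building a spatial adjacency graph over POIs from their pairwise distances, and a swapped-prediction contrastive objective in which the representation of one view is used to predict the cluster assignment of a similar instance. The sketch below illustrates both ideas in a minimal, self-contained form; the distance threshold, the SwAV-style formulation of swapped prediction, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def build_poi_adjacency(coords, radius):
    """Connect two POIs iff their Euclidean distance is below `radius`
    (a hypothetical threshold; the paper only says the graph is founded
    on POIs and their respective distances)."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = (dists < radius).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj

def _softmax(x, temperature=1.0):
    x = x / temperature
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def swapped_prediction_loss(z1, z2, prototypes, temperature=0.1):
    """SwAV-style swapped prediction between two similar instances:
    the soft cluster assignment ("code") of one view is the target for
    the prediction made from the other view, and vice versa."""
    # Predictions (sharpened by temperature) and codes (targets; in a
    # real training loop the codes would be computed without gradients).
    p1 = _softmax(z1 @ prototypes.T, temperature)
    p2 = _softmax(z2 @ prototypes.T, temperature)
    q1 = _softmax(z1 @ prototypes.T)
    q2 = _softmax(z2 @ prototypes.T)
    # Symmetric cross-entropy: predict view 2's code from view 1 and vice versa.
    ce_12 = -np.mean(np.sum(q2 * np.log(p1 + 1e-9), axis=1))
    ce_21 = -np.mean(np.sum(q1 * np.log(p2 + 1e-9), axis=1))
    return 0.5 * (ce_12 + ce_21)
```

In this sketch the adjacency graph would feed a graph encoder that produces the embeddings `z1`, `z2` for a target subgraph and a similar instance; minimizing the swapped-prediction loss then pre-trains the encoder on unlabeled data before fine-tuning on accurate crowd flow labels.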
Related papers
- EasyST: A Simple Framework for Spatio-Temporal Prediction [18.291117879544945]
We propose EasyST, a simple framework for spatio-temporal prediction.
It learns lightweight and robust Multi-Layer Perceptrons (MLPs) by distilling knowledge from complex spatio-temporal GNNs.
EasyST surpasses state-of-the-art approaches in terms of efficiency and accuracy.
arXiv Detail & Related papers (2024-09-10T11:40:01Z) - Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - 4D Contrastive Superflows are Dense 3D Representation Learners [62.433137130087445]
We introduce SuperFlow, a novel framework designed to harness consecutive LiDAR-camera pairs for establishing pretraining objectives.
To further boost learning efficiency, we incorporate a plug-and-play view consistency module that enhances alignment of the knowledge distilled from camera views.
arXiv Detail & Related papers (2024-07-08T17:59:54Z) - SPOT: Scalable 3D Pre-training via Occupancy Prediction for Learning Transferable 3D Representations [76.45009891152178]
The pretraining-finetuning approach can alleviate the labeling burden by fine-tuning a pre-trained backbone across various downstream datasets and tasks.
We show, for the first time, that general representation learning can be achieved through the task of occupancy prediction.
Our findings will facilitate the understanding of LiDAR points and pave the way for future advancements in LiDAR pre-training.
arXiv Detail & Related papers (2023-09-19T11:13:01Z) - Correlating sparse sensing for large-scale traffic speed estimation: A
Laplacian-enhanced low-rank tensor kriging approach [76.45949280328838]
We propose a Laplacian-enhanced low-rank tensor (LETC) framework featuring both low-rankness and multi-temporal correlations for large-scale traffic speed kriging.
We then design an efficient solution algorithm via several effective numeric techniques to scale up the proposed model to network-wide kriging.
arXiv Detail & Related papers (2022-10-21T07:25:57Z) - Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge
Transfer [58.6106391721944]
Cross-city knowledge has shown its promise, where the model learned from data-sufficient cities is leveraged to benefit the learning process of data-scarce cities.
We propose a model-agnostic few-shot learning framework for spatio-temporal graphs called ST-GFSL.
We conduct comprehensive experiments on four traffic speed prediction benchmarks and the results demonstrate the effectiveness of ST-GFSL compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-05-27T12:46:52Z) - Building Autocorrelation-Aware Representations for Fine-Scale
Spatiotemporal Prediction [1.2862507359003323]
We present a novel deep learning architecture that incorporates theories of spatial statistics into neural networks.
DeepLATTE contains an autocorrelation-guided semi-supervised learning strategy to enforce both local autocorrelation patterns and global autocorrelation trends.
We conduct a demonstration of DeepLATTE using publicly available data for an important public health topic, air quality prediction in a well-fitting, complex physical environment.
arXiv Detail & Related papers (2021-12-10T03:21:19Z) - STJLA: A Multi-Context Aware Spatio-Temporal Joint Linear Attention
Network for Traffic Forecasting [7.232141271583618]
We propose a novel deep learning model for traffic forecasting named Multi-Context aware Spatio-Temporal Joint Linear Attention (STJLA).
STJLA applies linear attention to a spatio-temporal joint graph to capture global dependence between all spatio-temporal nodes efficiently.
Experiments on two real-world traffic datasets, England and Temporal7, demonstrate that STJLA achieves 9.83% and 3.08% accuracy improvements in MAE over state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-04T06:39:18Z) - Incorporating Reachability Knowledge into a Multi-Spatial Graph
Convolution Based Seq2Seq Model for Traffic Forecasting [12.626657411944949]
Existing works cannot perform well for multi-step traffic prediction that involves a long future time period.
Our model is evaluated on two real-world traffic datasets and achieves better performance than other competitors.
arXiv Detail & Related papers (2021-07-04T03:23:30Z) - Self-Point-Flow: Self-Supervised Scene Flow Estimation from Point Clouds
with Optimal Transport and Random Walk [59.87525177207915]
We develop a self-supervised method to establish correspondences between two point clouds to approximate scene flow.
Our method achieves state-of-the-art performance among self-supervised learning methods.
arXiv Detail & Related papers (2021-05-18T03:12:42Z) - Interpretable Crowd Flow Prediction with Spatial-Temporal Self-Attention [16.49833154469825]
The most challenging part of predicting crowd flow is to measure the complicated spatial-temporal dependencies.
We propose a Spatial-Temporal Self-Attention Network (STSAN) with an ST encoding gate that calculates the entire spatial-temporal representation.
Experimental results on traffic and mobile data demonstrate that the proposed method reduces inflow and outflow RMSE by 16% and 8% on the Taxi-NYC dataset.
arXiv Detail & Related papers (2020-02-22T12:43:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.