Scalable Trajectory-User Linking with Dual-Stream Representation Networks
- URL: http://arxiv.org/abs/2503.15002v1
- Date: Wed, 19 Mar 2025 08:52:23 GMT
- Title: Scalable Trajectory-User Linking with Dual-Stream Representation Networks
- Authors: Hao Zhang, Wei Chen, Xingyu Zhao, Jianpeng Qi, Guiyuan Jiang, Yanwei Yu
- Abstract summary: Trajectory-user linking (TUL) aims to match anonymous trajectories to the most likely users who generated them. Existing TUL methods are limited by high model complexity and poor learning of effective trajectory representations. We propose ScaleTUL, a novel $\underline{Scal}$abl$\underline{e}$ Trajectory-User Linking method with dual-stream representation networks for the large-scale $\underline{TUL}$ problem.
- Score: 18.563941434746784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory-user linking (TUL) aims to match anonymous trajectories to the most likely users who generated them, offering benefits for a wide range of real-world spatio-temporal applications. However, existing TUL methods are limited by high model complexity and poor learning of effective trajectory representations, rendering them ineffective in handling large-scale user trajectory data. In this work, we propose a novel $\underline{Scal}$abl$\underline{e}$ Trajectory-User Linking method with dual-stream representation networks for the large-scale $\underline{TUL}$ problem, named ScaleTUL. Specifically, ScaleTUL generates two views using temporal and spatial augmentations and exploits a supervised contrastive learning framework to effectively capture the irregularities of trajectories. In each view, a dual-stream trajectory encoder, consisting of a long-term encoder and a short-term encoder, is designed to learn unified trajectory representations that fuse different temporal-spatial dependencies. Then, a TUL layer is used to associate the trajectories with the corresponding users in the representation space using a two-stage training strategy. Experimental results on check-in mobility datasets from three real-world cities and the nationwide U.S. demonstrate the superiority of ScaleTUL over state-of-the-art baselines for large-scale TUL tasks.
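The supervised contrastive objective over two augmented views described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it assumes trajectory embeddings from the temporal and spatial views have already been computed by the dual-stream encoder, and uses user IDs as the supervision labels.

```python
import numpy as np

def supcon_loss(z1, z2, labels, temperature=0.1):
    """Supervised contrastive loss over two augmented views.

    z1, z2 : (N, d) trajectory embeddings from the two views
             (hypothetical encoder outputs); labels : (N,) user ids.
    Embeddings of trajectories from the same user are pulled together,
    while those of different users are pushed apart.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize
    y = np.concatenate([labels, labels])              # (2N,)

    sim = z @ z.T / temperature                       # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    # positives: same user, excluding the anchor itself
    pos = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) \
        / np.maximum(pos.sum(axis=1), 1)
    return -per_anchor.mean()
```

Each trajectory's augmented counterpart is always a positive, so every anchor has at least one positive pair; additional trajectories from the same user act as extra positives, which is what distinguishes the supervised variant from plain self-supervised contrastive learning.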
Related papers
- HGTUL: A Hypergraph-based Model For Trajectory User Linking [2.9945319641858985]
Trajectory-User Linking (TUL) links anonymous trajectories with the users who generate them. We propose a novel HyperGraph-based multi-perspective Trajectory User Linking model (HGTUL).
arXiv Detail & Related papers (2025-02-11T13:39:35Z) - TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
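The idea of using a CWT to lift a 1D signal into a 2D tensor can be illustrated with a minimal NumPy sketch. This is a generic Ricker (Mexican-hat) wavelet transform, not TCCT-Net's exact pipeline; the function names and width choices are illustrative.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter a."""
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_tensor(signal, widths):
    """Continuous wavelet transform: 1D signal -> 2D (scale x time) tensor.

    Each row convolves the signal with a wavelet at one scale, so the
    output stacks frequency content over time in a 2D image-like form.
    """
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        kernel = ricker(min(10 * int(w), len(signal)), w)
        out[i] = np.convolve(signal, kernel, mode="same")
    return out
```

The resulting (scales, time) tensor can then be fed to 2D convolutional layers, which is the motivation for representing temporal-frequency information this way.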
arXiv Detail & Related papers (2024-04-15T06:01:48Z) - UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation [34.918489559139715]
Universal Vehicle Trajectory (UVTM) is designed to support different tasks based on incomplete or sparse trajectories.
To handle sparse trajectories effectively, UVTM is pre-trained by reconstructing densely sampled trajectories from sparsely sampled ones.
arXiv Detail & Related papers (2024-02-11T15:49:50Z) - Trajectory-User Linking via Hierarchical Spatio-Temporal Attention
Networks [39.6505270702036]
Trajectory-User Linking (TUL) is crucial for human mobility modeling by linking trajectories to users.
Existing works mainly rely on the neural framework to encode the temporal dependencies in trajectories.
This work presents a new hierarchical spatio-temporal attention neural network called AttnTUL to encode the local trajectory transitional patterns and global spatial dependencies for TUL.
arXiv Detail & Related papers (2023-02-11T06:22:50Z) - Self-supervised Trajectory Representation Learning with Temporal
Regularities and Travel Semantics [30.9735101687326]
Trajectory Representation Learning (TRL) is a powerful tool for spatial-temporal data analysis and management.
Existing TRL works usually treat trajectories as ordinary sequence data, while some important spatial-temporal characteristics, such as temporal regularities and travel semantics, are not fully exploited.
We propose a novel Self-supervised trajectory representation learning framework with TemporAl Regularities and Travel semantics, namely START.
arXiv Detail & Related papers (2022-11-17T13:14:47Z) - Joint Spatial-Temporal and Appearance Modeling with Transformer for
Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z) - Mutual Distillation Learning Network for Trajectory-User Linking [30.954285341714]
Trajectory-User Linking (TUL) has been a challenging problem due to the sparsity in check-in mobility data.
We propose a novel Mutual distillation learning network to solve the TUL problem for sparse check-in mobility data, named MainTUL.
arXiv Detail & Related papers (2022-05-08T03:50:37Z) - Large Scale Time-Series Representation Learning via Simultaneous Low and
High Frequency Feature Bootstrapping [7.0064929761691745]
We propose a non-contrastive self-supervised learning approach that efficiently captures low- and high-frequency time-varying features.
Our method takes raw time series data as input and creates two different augmented views for two branches of the model.
To demonstrate the robustness of our model we performed extensive experiments and ablation studies on five real-world time-series datasets.
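The two-view augmentation step described above can be illustrated as follows. This is a generic sketch using common jitter and amplitude-scaling augmentations as stand-ins; the paper's actual low/high-frequency bootstrapping branches are not specified here.

```python
import numpy as np

def two_views(x, rng, jitter_std=0.05, scale_range=(0.8, 1.2)):
    """Create two stochastic augmented views of a raw time series x.

    View 1 adds Gaussian jitter; view 2 rescales the amplitude.
    Both are hypothetical augmentations for illustration only.
    """
    view1 = x + rng.normal(0.0, jitter_std, size=x.shape)
    view2 = x * rng.uniform(*scale_range)
    return view1, view2
```

Each view is then passed through a separate branch of the model, and the training objective aligns the two branch outputs without negative pairs, which is what makes the approach non-contrastive.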
arXiv Detail & Related papers (2022-04-24T14:39:47Z) - Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust
Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed Cross-Modal Message Propagation Network (CMMPNet)
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z) - Spatial-Temporal Correlation and Topology Learning for Person
Re-Identification in Videos [78.45050529204701]
We propose a novel framework to pursue discriminative and robust representation by modeling cross-scale spatial-temporal correlation.
CTL utilizes a CNN backbone and a key-points estimator to extract semantic local features from the human body.
It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and physical connections of the human body.
arXiv Detail & Related papers (2021-04-15T14:32:12Z) - Hierarchical Optimal Transport for Robust Multi-View Learning [97.21355697826345]
Two assumptions underlying multi-view learning may be questionable in practice, limiting its application.
We propose a hierarchical optimal transport (HOT) method to mitigate the dependency on these two assumptions.
The HOT method is applicable to both unsupervised and semi-supervised learning, and experimental results show that it performs robustly on both synthetic and real-world tasks.
arXiv Detail & Related papers (2020-06-04T22:24:45Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT
Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.