Holistic Semantic Representation for Navigational Trajectory Generation
- URL: http://arxiv.org/abs/2501.02737v2
- Date: Tue, 11 Feb 2025 12:38:35 GMT
- Title: Holistic Semantic Representation for Navigational Trajectory Generation
- Authors: Ji Cao, Tongya Zheng, Qinghong Guo, Yu Wang, Junshu Dai, Shunyu Liu, Jie Yang, Jie Song, Mingli Song
- Abstract summary: We develop a HOlistic SEmantic Representation (HOSER) framework for navigational trajectory generation.
We demonstrate that HOSER outperforms state-of-the-art baselines by a significant margin.
- Score: 33.55971756543447
- Abstract: Trajectory generation has garnered significant attention from researchers in the field of spatio-temporal analysis, as it can generate substantial synthesized human mobility trajectories that enhance user privacy and alleviate data scarcity. However, existing trajectory generation methods often focus on improving trajectory generation quality from a singular perspective, lacking a comprehensive semantic understanding across various scales. Consequently, we are inspired to develop a HOlistic SEmantic Representation (HOSER) framework for navigational trajectory generation. Given an origin-and-destination (OD) pair and the starting time point of a latent trajectory, we first propose a Road Network Encoder to expand the receptive field of road- and zone-level semantics. Second, we design a Multi-Granularity Trajectory Encoder to integrate the spatio-temporal semantics of the generated trajectory at both the point and trajectory levels. Finally, we employ a Destination-Oriented Navigator to seamlessly integrate destination-oriented guidance. Extensive experiments on three real-world datasets demonstrate that HOSER outperforms state-of-the-art baselines by a significant margin. Moreover, the model's performance in few-shot learning and zero-shot learning scenarios further verifies the effectiveness of our holistic semantic representation.
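A minimal sketch of how the three components described in the abstract could be wired together is given below. It assumes a PyTorch-style implementation with made-up module names and dimensions; it is not the authors' released code.
```python
import torch
import torch.nn as nn

class HOSERSketch(nn.Module):
    """Hypothetical outline of the HOSER pipeline described in the abstract.

    Layer choices and dimensions are assumptions, not the paper's implementation.
    """

    def __init__(self, num_roads, num_zones, d=128):
        super().__init__()
        # Road Network Encoder: road- and zone-level semantics.
        self.road_emb = nn.Embedding(num_roads, d)
        self.zone_emb = nn.Embedding(num_zones, d)
        # Multi-Granularity Trajectory Encoder: point- and trajectory-level semantics.
        self.point_enc = nn.Linear(2 * d + 1, d)          # road + zone + timestamp
        self.traj_enc = nn.GRU(d, d, batch_first=True)    # trajectory-level state
        # Destination-Oriented Navigator: score candidate next roads given the destination.
        self.navigator = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, roads, zones, times, dest_road, cand_roads):
        # roads/zones/times: (B, T) partial trajectory; dest_road: (B,); cand_roads: (B, K)
        pts = torch.cat([self.road_emb(roads), self.zone_emb(zones), times.unsqueeze(-1)], dim=-1)
        h, _ = self.traj_enc(self.point_enc(pts))          # (B, T, d)
        state = h[:, -1]                                   # current trajectory-level state
        dest = self.road_emb(dest_road)                    # destination-oriented guidance
        cand = self.road_emb(cand_roads)                   # (B, K, d) candidate next roads
        ctx = torch.cat([state, dest], dim=-1).unsqueeze(1).expand(-1, cand.size(1), -1)
        return self.navigator(torch.cat([ctx, cand], dim=-1)).squeeze(-1)  # (B, K) scores

# Toy usage with made-up sizes: score 4 candidate next roads for 2 partial trajectories.
m = HOSERSketch(num_roads=1000, num_zones=50)
scores = m(torch.randint(0, 1000, (2, 5)), torch.randint(0, 50, (2, 5)),
           torch.rand(2, 5), torch.randint(0, 1000, (2,)), torch.randint(0, 1000, (2, 4)))
print(scores.shape)  # torch.Size([2, 4])
```
At generation time, such a scorer would be applied step by step, choosing the next road segment until the destination of the OD pair is reached.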
Related papers
- TrajLearn: Trajectory Prediction Learning using Deep Generative Models [4.097342535693401]
Trajectory prediction aims to estimate an entity's future path using its current position and historical movement data.
To address these challenges, we introduce TrajLearn, a novel model for trajectory prediction.
TrajLearn predicts the next $k$ steps by integrating a customized beam search for exploring multiple potential paths.
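The customized beam search mentioned above can be illustrated with a generic sketch; `score_next` below is a hypothetical stand-in for TrajLearn's learned next-step model, not its actual decoder.
```python
import math

def beam_search(start, score_next, k=5, beam_width=3):
    """Generic beam search over trajectory continuations.

    score_next(path) -> list of (next_state, log_prob); a stand-in for a learned model.
    Returns the highest-scoring path of length k+1 (including the start state).
    """
    beams = [([start], 0.0)]                      # (path, cumulative log-probability)
    for _ in range(k):
        candidates = []
        for path, logp in beams:
            for nxt, step_logp in score_next(path):
                candidates.append((path + [nxt], logp + step_logp))
        # Keep only the beam_width highest-scoring partial paths.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return max(beams, key=lambda b: b[1])[0]

# Toy usage: two hypothetical successor cells per step with fixed probabilities.
print(beam_search("A", lambda p: [(p[-1] + "x", math.log(0.6)), (p[-1] + "y", math.log(0.4))]))
```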
arXiv Detail & Related papers (2024-12-30T23:38:52Z) - Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batchnorm parameters partway through training matches the performance of training the entire network.
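The setup behind that claim is easy to reproduce: the sketch below freezes every parameter of a PyTorch model except the scalar BatchNorm affine parameters (the paper's exact switch-over schedule is not modeled here).
```python
import torch.nn as nn

def train_only_batchnorm(model: nn.Module):
    """Freeze all parameters except the scalar affine parameters of BatchNorm layers."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)) and m.affine:
            m.weight.requires_grad = True
            m.bias.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# Example: only the BN gammas/betas of a small CNN remain trainable.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
trainable = train_only_batchnorm(model)
print(sum(p.numel() for p in trainable))  # 32 = 16 gammas + 16 betas
```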
arXiv Detail & Related papers (2024-03-12T07:32:47Z) - How To Not Train Your Dragon: Training-free Embodied Object Goal Navigation with Semantic Frontiers [94.46825166907831]
We present a training-free solution to tackle the object goal navigation problem in Embodied AI.
Our method builds a structured scene representation based on the classic visual simultaneous localization and mapping (V-SLAM) framework.
Our method propagates semantics on the scene graphs based on language priors and scene statistics to introduce semantic knowledge to the geometric frontiers.
arXiv Detail & Related papers (2023-05-26T13:38:33Z) - DiffTraj: Generating GPS Trajectory with Diffusion Probabilistic Model [44.490978394267195]
We propose a spatial-temporal probabilistic model for trajectory generation (DiffTraj).
The core idea is to reconstruct and synthesize geographic trajectories from white noise through a reverse trajectory denoising process.
Experiments on two real-world datasets show that DiffTraj can be intuitively applied to generate high-fidelity trajectories.
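The reverse trajectory denoising process follows the usual DDPM recipe; the loop below is a generic sketch with an assumed linear noise schedule and a hypothetical noise predictor `eps_model`, not DiffTraj's actual network.
```python
import torch

@torch.no_grad()
def reverse_denoise(eps_model, n_points, steps=1000, beta_start=1e-4, beta_end=0.02):
    """Generic DDPM-style reverse process: white noise -> trajectory of (lat, lon) points."""
    betas = torch.linspace(beta_start, beta_end, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, n_points, 2)                      # start from white noise
    for t in reversed(range(steps)):
        eps = eps_model(x, torch.tensor([t]))            # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise          # sample x_{t-1}
    return x                                             # denoised trajectory

# Toy usage: a dummy predictor that always predicts zero noise.
traj = reverse_denoise(lambda x, t: torch.zeros_like(x), n_points=16, steps=10)
print(traj.shape)  # torch.Size([1, 16, 2])
```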
arXiv Detail & Related papers (2023-04-23T08:42:45Z) - Rethinking Range View Representation for LiDAR Segmentation [66.73116059734788]
"Many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments against effective learning from range view projections.
We present RangeFormer, a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing.
We show that, for the first time, a range view method is able to surpass the point, voxel, and multi-view fusion counterparts in the competing LiDAR semantic and panoptic segmentation benchmarks.
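For context, range view methods start from a spherical projection of the LiDAR sweep into a 2D image; the function below is a standard (not RangeFormer-specific) projection sketch with assumed field-of-view limits, and its overwrite step is exactly the "many-to-one" mapping mentioned above.
```python
import numpy as np

def range_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project LiDAR points (N, 3) into an H x W range image via spherical mapping.

    fov_up/fov_down are assumed sensor limits in degrees, not values from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1) + 1e-8
    yaw = np.arctan2(y, x)                               # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)                         # elevation
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)

    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int)      # column from azimuth
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * H).astype(int)  # row from elevation
    u, v = np.clip(u, 0, W - 1), np.clip(v, 0, H - 1)

    image = np.full((H, W), -1.0, dtype=np.float32)
    image[v, u] = depth                                  # "many-to-one": later points overwrite
    return image

# Toy usage with random points standing in for a LiDAR sweep.
print(range_projection(np.random.randn(1000, 3) * 10.0).shape)  # (64, 2048)
```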
arXiv Detail & Related papers (2023-03-09T16:13:27Z) - Continuous Trajectory Generation Based on Two-Stage GAN [50.55181727145379]
We propose a novel two-stage generative adversarial framework to generate the continuous trajectory on the road network.
Specifically, we build the generator under the human mobility hypothesis of the A* algorithm to learn the human mobility behavior.
For the discriminator, we combine the sequential reward with the mobility yaw reward to enhance the effectiveness of the generator.
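Both ingredients can be sketched generically: an A*-style score that trades off the cost travelled so far against an estimated cost to the destination, and a weighted blend of the two rewards. The heuristic, weights, and coordinates below are placeholders, not the paper's learned components.
```python
import math

def astar_score(g_cost, node, dest, learned_heuristic=None):
    """A*-style score: cost travelled so far plus an estimate of the remaining cost.

    The remaining-cost estimate would be learned from mobility data; here it defaults
    to straight-line distance between (x, y) coordinates as a placeholder.
    """
    h = learned_heuristic(node, dest) if learned_heuristic else math.dist(node, dest)
    return g_cost + h

def combined_reward(seq_reward, yaw_reward, alpha=0.5):
    """Hypothetical blend of the sequential reward and the mobility yaw reward."""
    return alpha * seq_reward + (1.0 - alpha) * yaw_reward

# Toy usage: prefer the candidate with the lower A*-style score toward the destination.
candidates = {(1.0, 1.0): 2.0, (0.0, 2.0): 2.5}          # node -> cost so far
print(min(candidates, key=lambda n: astar_score(candidates[n], n, dest=(3.0, 3.0))))
print(combined_reward(0.8, 0.3))                          # blended discriminator reward
```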
arXiv Detail & Related papers (2023-01-16T09:54:02Z) - Self-supervised Trajectory Representation Learning with Temporal Regularities and Travel Semantics [30.9735101687326]
Trajectory Representation Learning (TRL) is a powerful tool for spatial-temporal data analysis and management.
Existing TRL works usually treat trajectories as ordinary sequence data, while some important spatial-temporal characteristics, such as temporal regularities and travel semantics, are not fully exploited.
We propose a novel Self-supervised trajectory representation learning framework with TemporAl Regularities and Travel semantics, namely START.
arXiv Detail & Related papers (2022-11-17T13:14:47Z) - DouFu: A Double Fusion Joint Learning Method For Driving Trajectory Representation [13.321587117066166]
We propose a novel multimodal fusion model, DouFu, for trajectory representation joint learning.
We first design movement, route, and global features generated from the trajectory data and urban functional zones.
With the global semantic feature, DouFu produces a comprehensive embedding for each trajectory.
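A minimal stand-in for that fusion step is a concatenate-and-project module over the three feature types; the sketch below uses assumed feature sizes and is not DouFu's actual double-fusion architecture.
```python
import torch
import torch.nn as nn

class LateFusionEmbedding(nn.Module):
    """Minimal stand-in for fusing movement, route, and global features into one embedding."""

    def __init__(self, d_move, d_route, d_global, d_out=128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_move + d_route + d_global, d_out),
            nn.ReLU(),
            nn.Linear(d_out, d_out),
        )

    def forward(self, move_feat, route_feat, global_feat):
        # Each input: (B, d_*). Output: one comprehensive embedding per trajectory, (B, d_out).
        return self.proj(torch.cat([move_feat, route_feat, global_feat], dim=-1))

# Toy usage with assumed feature sizes.
emb = LateFusionEmbedding(32, 64, 16)(torch.randn(4, 32), torch.randn(4, 64), torch.randn(4, 16))
print(emb.shape)  # torch.Size([4, 128])
```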
arXiv Detail & Related papers (2022-05-05T07:43:35Z) - Adaptive Trajectory Prediction via Transferable GNN [74.09424229172781]
We propose a novel Transferable Graph Neural Network (T-GNN) framework that jointly conducts trajectory prediction and domain alignment in a unified manner.
Specifically, a domain-invariant GNN is proposed to explore structural motion knowledge while reducing domain-specific knowledge.
An attention-based adaptive knowledge learning module is further proposed to explore fine-grained individual-level feature representation for knowledge transfer.
arXiv Detail & Related papers (2022-03-09T21:08:47Z) - Aerial Images Meet Crowdsourced Trajectories: A New Approach to Robust Road Extraction [110.61383502442598]
We introduce a novel neural network framework termed Cross-Modal Message Propagation Network (CMMPNet).
CMMPNet is composed of two deep Auto-Encoders for modality-specific representation learning and a tailor-designed Dual Enhancement Module for cross-modal representation refinement.
Experiments on three real-world benchmarks demonstrate the effectiveness of our CMMPNet for robust road extraction.
arXiv Detail & Related papers (2021-11-30T04:30:10Z) - A Deep Learning Framework for Generation and Analysis of Driving Scenario Trajectories [2.908482270923597]
We propose a unified deep learning framework for the generation and analysis of driving scenario trajectories.
We experimentally investigate the performance of the proposed framework on real-world scenario trajectories obtained from in-field data collection.
arXiv Detail & Related papers (2020-07-28T23:33:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.