Contrastive Trajectory Similarity Learning with Dual-Feature Attention
- URL: http://arxiv.org/abs/2210.05155v1
- Date: Tue, 11 Oct 2022 05:25:14 GMT
- Title: Contrastive Trajectory Similarity Learning with Dual-Feature Attention
- Authors: Yanchuan Chang, Jianzhong Qi, Yuxuan Liang, Egemen Tanin
- Abstract summary: Trajectory similarity measures act as query predicates in trajectory databases.
We propose a contrastive learning-based trajectory modelling method named TrajCL.
TrajCL is consistently and significantly more accurate and faster than the state-of-the-art trajectory similarity measures.
- Score: 24.445998309807965
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Trajectory similarity measures act as query predicates in trajectory
databases, making them the key player in determining the query results. They
also have a heavy impact on the query efficiency. An ideal measure should have
the capability to accurately evaluate the similarity between any two
trajectories in a very short amount of time. However, existing heuristic
measures are mainly based on pointwise comparisons following hand-crafted
rules, thus resulting in either poor quality results or low efficiency in many
cases. Although several deep learning-based measures have recently aimed to
address these problems, their improvements are limited by the difficulty of
learning the fine-grained spatial patterns of trajectories.
To address these issues, we propose a contrastive learning-based trajectory
modelling method named TrajCL, which is robust in application scenarios where
the data set contains low-quality trajectories. Specifically, we present four
trajectory augmentation methods and a novel dual-feature self-attention-based
trajectory backbone encoder. The resultant model can jointly learn both the
spatial and the structural patterns of trajectories. Our model does not involve
any recurrent structures and thus has a high efficiency. Besides, our
pre-trained backbone encoder can be fine-tuned towards other computationally
expensive measures with minimal supervision data. Experimental results show
that TrajCL is consistently and significantly more accurate and faster than the
state-of-the-art trajectory similarity measures. After fine-tuning, i.e., when
being used as an estimator for heuristic measures, TrajCL can even outperform
the state-of-the-art supervised method by up to 32% in accuracy when
processing trajectory similarity queries.
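The contrastive setup the abstract describes (two augmented views of each trajectory pulled together by a shared encoder) can be sketched generically. The `augment` and `encode` functions below are illustrative toys, not the paper's four augmentation methods or its dual-feature self-attention backbone; only the InfoNCE-style loss is standard contrastive machinery.

```python
import math
import random

def augment(traj, drop_rate=0.2):
    """Toy augmentation: randomly drop points from a trajectory
    (a stand-in; TrajCL's four augmentations are more elaborate)."""
    kept = [p for p in traj if random.random() > drop_rate]
    return kept if kept else traj[:1]

def encode(traj):
    """Toy encoder: hand-crafted summary features, normalized to unit
    length, instead of a learned attention-based backbone."""
    xs = [p[0] for p in traj]
    ys = [p[1] for p in traj]
    n = len(traj)
    v = [sum(xs) / n, sum(ys) / n, max(xs) - min(xs), max(ys) - min(ys)]
    norm = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / norm for c in v]

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: anchor i should score highest against its own
    positive view i, relative to all other positives in the batch."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [dot(a, p) / temperature for p in positives]
        m = max(logits)  # stabilize log-sum-exp
        log_denom = m + math.log(sum(math.exp(s - m) for s in logits))
        loss += -(logits[i] - log_denom)
    return loss / len(anchors)

random.seed(0)
trajectories = [
    [(0, 0), (1, 0), (2, 1)],
    [(5, 5), (6, 6), (7, 8)],
    [(0, 3), (1, 4), (2, 5)],
]
views_a = [encode(augment(t)) for t in trajectories]
views_b = [encode(augment(t)) for t in trajectories]
loss = info_nce(views_a, views_b)
```

Minimizing this loss pushes the two views of the same trajectory together while pushing views of different trajectories apart, which is the self-supervised signal that lets such an encoder train without similarity labels.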
Related papers
- Data-driven Probabilistic Trajectory Learning with High Temporal Resolution in Terminal Airspace [9.688760969026305]
We propose a data-driven learning framework, that leverages the predictive and feature extraction capabilities of the mixture models and seq2seq-based neural networks.
After training with this framework, the learned model can improve long-step prediction accuracy significantly.
The accuracy and effectiveness of the approach are evaluated by comparing the predicted trajectories with the ground truth.
arXiv Detail & Related papers (2024-09-25T21:08:25Z) - T-JEPA: A Joint-Embedding Predictive Architecture for Trajectory Similarity Computation [6.844357745770191]
Trajectory similarity computation is an essential technique for analyzing moving patterns of spatial data across various applications.
We propose T-JEPA, a self-supervised trajectory similarity method employing Joint-Embedding Predictive Architecture (JEPA) to enhance trajectory representation learning.
arXiv Detail & Related papers (2024-06-13T09:51:51Z) - Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language Models as Agents [41.14201835950814]
Large language models (LLMs) have achieved success in acting as agents, which interact with environments through tools such as search engines.
Previous work has first collected interaction trajectories between LLMs and environments, using only trajectories that successfully finished the task to fine-tune smaller models.
We argue that unsuccessful trajectories offer valuable insights, and LLMs can learn from these trajectories through appropriate quality control and fine-tuning strategies.
arXiv Detail & Related papers (2024-02-18T17:10:07Z) - DTC: Deep Tracking Control [16.2850135844455]
We propose a hybrid control architecture that combines the advantages of both worlds to achieve greater robustness, foot-placement accuracy, and terrain generalization.
A deep neural network policy is trained in simulation, aiming to track the optimized footholds.
We demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts.
arXiv Detail & Related papers (2023-09-27T07:57:37Z) - Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against the accumulated errors perturbations with the regularization towards the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z) - PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map [58.53373202647576]
We propose PreTraM, a self-supervised pre-training scheme for trajectory forecasting.
It consists of two parts: 1) Trajectory-Map Contrastive Learning, where we project trajectories and maps to a shared embedding space with cross-modal contrastive learning, and 2) Map Contrastive Learning, where we enhance map representation with contrastive learning on large quantities of HD-maps.
On top of popular baselines such as AgentFormer and Trajectron++, PreTraM boosts their performance by 5.5% and 6.9% relatively in FDE-10 on the challenging nuScenes dataset.
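PreTraM's Trajectory-Map Contrastive Learning pairs each trajectory with its map context in a shared embedding space. A common way to do this, shown below as a hedged sketch rather than PreTraM's actual implementation, is a symmetric CLIP-style loss computed over cosine similarities in both retrieval directions (trajectory-to-map and map-to-trajectory).

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def cross_modal_nce(traj_emb, map_emb, temperature=0.07):
    """Symmetric cross-modal contrastive loss: trajectory i should
    match map patch i under both retrieval directions."""
    n = len(traj_emb)
    sims = [[cosine(t, m) / temperature for m in map_emb] for t in traj_emb]

    def nll(row, i):
        # negative log-softmax of the matching entry, stabilized
        mx = max(row)
        return -(row[i] - (mx + math.log(sum(math.exp(s - mx) for s in row))))

    t2m = sum(nll(sims[i], i) for i in range(n)) / n
    m2t = sum(nll([sims[j][i] for j in range(n)], i) for i in range(n)) / n
    return 0.5 * (t2m + m2t)

# Toy aligned pairs: embedding i of each modality describes the same scene.
traj_emb = [[1.0, 0.0], [0.0, 1.0]]
map_emb = [[1.0, 0.1], [0.1, 1.0]]
loss = cross_modal_nce(traj_emb, map_emb)
```

Averaging the two directions keeps the objective symmetric, so neither modality's encoder dominates training.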
arXiv Detail & Related papers (2022-04-21T23:01:21Z) - Trajectory Forecasting from Detection with Uncertainty-Aware Motion Encoding [121.66374635092097]
Trajectories obtained from object detection and tracking are inevitably noisy.
We propose a trajectory predictor directly based on detection results without relying on explicitly formed trajectories.
arXiv Detail & Related papers (2022-02-03T09:09:56Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - An Unsupervised Learning Method with Convolutional Auto-Encoder for Vessel Trajectory Similarity Computation [13.003061329076775]
We propose an unsupervised learning method which automatically extracts low-dimensional features through a convolutional auto-encoder (CAE).
Based on the massive vessel trajectories collected, the CAE can learn the low-dimensional representations of informative trajectory images in an unsupervised manner.
The proposed method largely outperforms traditional trajectory similarity methods in terms of efficiency and effectiveness.
arXiv Detail & Related papers (2021-01-10T04:42:11Z) - Tracking Performance of Online Stochastic Learners [57.14673504239551]
Online algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy.
We establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models.
arXiv Detail & Related papers (2020-04-04T14:16:27Z) - Pairwise Similarity Knowledge Transfer for Weakly Supervised Object Localization [53.99850033746663]
We study the problem of learning localization model on target classes with weakly supervised image labels.
In this work, we argue that learning only an objectness function is a weak form of knowledge transfer.
Experiments on the COCO and ILSVRC 2013 detection datasets show that the performance of the localization model improves significantly with the inclusion of pairwise similarity function.
arXiv Detail & Related papers (2020-03-18T17:53:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.