GCT-TTE: Graph Convolutional Transformer for Travel Time Estimation
- URL: http://arxiv.org/abs/2306.04324v2
- Date: Sun, 15 Oct 2023 08:30:00 GMT
- Title: GCT-TTE: Graph Convolutional Transformer for Travel Time Estimation
- Authors: Vladimir Mashurov, Vaagn Chopurian, Vadim Porvatov, Arseny Ivanov,
Natalia Semenova
- Abstract summary: This paper introduces a new transformer-based model for the problem of travel time estimation.
The key feature of the proposed GCT-TTE architecture is the utilization of different data modalities capturing different properties of an input path.
GCT-TTE was deployed as a web service accessible for further experiments with user-defined routes.
- Score: 1.6499388997661122
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a new transformer-based model for the problem of travel
time estimation. The key feature of the proposed GCT-TTE architecture is the
utilization of different data modalities capturing different properties of an
input path. Along with the extensive study regarding the model configuration,
we implemented and evaluated a broad set of recent baselines for
path-aware and path-blind settings. The conducted computational experiments
have confirmed the viability of our pipeline, which outperformed
state-of-the-art models on both considered datasets. Additionally, GCT-TTE was
deployed as a web service accessible for further experiments with user-defined
routes.
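As a rough illustration of the multimodal idea in the abstract — encoding the road graph with graph convolutions and aggregating per-segment features along a path with attention — the following NumPy sketch composes the two steps. All shapes, the single-head attention, and the scalar readout are illustrative assumptions, not the authors' GCT-TTE implementation.

```python
import numpy as np

def graph_conv(adj, feats, weight):
    # Normalized graph convolution (Kipf & Welling style): D^-1/2 A_hat D^-1/2 X W
    a = adj + np.eye(adj.shape[0])               # add self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))    # symmetric normalization
    return np.maximum(d @ a @ d @ feats @ weight, 0.0)  # ReLU

def self_attention(x):
    # Single-head self-attention over the segments of one path
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
n_seg, d_in, d_hid = 5, 8, 16
adj = (rng.random((n_seg, n_seg)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)                  # undirected toy road graph
feats = rng.standard_normal((n_seg, d_in))    # per-segment input features
w = rng.standard_normal((d_in, d_hid))        # random stand-in for a learned weight

h = graph_conv(adj, feats, w)                 # graph-modality encoder
h = self_attention(h)                         # transformer-style aggregation
eta = float(h.mean(axis=0) @ rng.standard_normal(d_hid))  # toy travel-time readout
```

A real pipeline would use learned parameters, additional modalities (e.g. imagery), and a trained regression head; the sketch only shows how graph and sequence processing compose.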
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer architecture.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z) - UniTraj: A Unified Framework for Scalable Vehicle Trajectory Prediction [93.77809355002591]
We introduce UniTraj, a comprehensive framework that unifies various datasets, models, and evaluation criteria.
We conduct extensive experiments and find that model performance significantly drops when transferred to other datasets.
We provide insights into dataset characteristics to explain these findings.
arXiv Detail & Related papers (2024-03-22T10:36:50Z) - MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
Transferring the pretrained models to downstream tasks may encounter task discrepancies, since pretraining is formulated as image classification or object discrimination.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z) - tsGT: Stochastic Time Series Modeling With Transformer [0.12905935507312413]
We introduce tsGT, a time series model built on a general-purpose transformer architecture.
We show that tsGT outperforms the state-of-the-art models on MAD and RMSE, and surpasses its peers on QL and CRPS, on four commonly used datasets.
arXiv Detail & Related papers (2024-03-08T22:59:41Z) - Online Test-Time Adaptation of Spatial-Temporal Traffic Flow Forecasting [13.770733370640565]
This paper conducts the first study of the online test-time adaptation techniques for spatial-temporal traffic flow forecasting problems.
We propose an Adaptive Double Correction by Series Decomposition (ADCSD) method, which first decomposes the output of the trained model into seasonal and trend-cyclical parts.
In the proposed ADCSD method, instead of fine-tuning the whole trained model during the testing phase, a lite network is attached after the trained model, and only the lite network is fine-tuned at test time, each time a data entry is observed.
arXiv Detail & Related papers (2024-01-08T12:04:39Z) - VST++: Efficient and Stronger Visual Saliency Transformer [74.26078624363274]
We develop an efficient and stronger VST++ model to explore global long-range dependencies.
We evaluate our model across various transformer-based backbones on RGB, RGB-D, and RGB-T SOD benchmark datasets.
arXiv Detail & Related papers (2023-10-18T05:44:49Z) - Transformer-Based Neural Surrogate for Link-Level Path Loss Prediction
from Variable-Sized Maps [11.327456466796681]
Estimating path loss for a transmitter-receiver location is key to many use-cases including network planning and handover.
We present a transformer-based neural network architecture that enables predicting link-level properties from maps of various dimensions and from sparse measurements.
arXiv Detail & Related papers (2023-10-06T20:17:40Z) - Fine-Grained Trajectory-based Travel Time Estimation for Multi-city
Scenarios Based on Deep Meta-Learning [18.786481521834762]
Travel Time Estimation (TTE) is indispensable in intelligent transportation systems (ITS).
Achieving fine-grained Trajectory-based Travel Time Estimation (TTTE) for multi-city scenarios is an important goal.
We propose a meta learning based framework, MetaTTE, to continuously provide accurate travel time estimation over time.
arXiv Detail & Related papers (2022-01-20T06:35:51Z) - PnP-DETR: Towards Efficient Visual Analysis with Transformers [146.55679348493587]
Recently, DETR pioneered the solution of vision tasks with transformers; it directly translates the image feature map into the object detection result.
Recent transformer-based image recognition models show consistent efficiency gains.
arXiv Detail & Related papers (2021-09-15T01:10:30Z) - Transformer-based Map Matching Model with Limited Ground-Truth Data
using Transfer-Learning Approach [6.510061176722248]
In many trajectory-based applications, it is necessary to map raw GPS trajectories onto road networks in digital maps.
In this paper, we consider the map-matching task from the data perspective, proposing a deep learning-based map-matching model.
We generate synthetic trajectory data to pre-train the Transformer model and then fine-tune the model with a limited number of ground-truth data.
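The synthetic-trajectory idea could, for instance, be realized along the following lines: sample points along a known road polyline, perturb them with GPS-like noise, and keep the road-segment index as the map-matching label. The polyline, noise model, and labeling scheme here are hypothetical, not the paper's generator:

```python
import numpy as np

rng = np.random.default_rng(42)
road = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])  # toy polyline

def synthetic_trajectory(road, n_points=20, noise=0.05):
    # Parameterize the polyline by arc length and sample positions uniformly.
    seg_vecs = np.diff(road, axis=0)
    seg_lens = np.linalg.norm(seg_vecs, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_lens)])
    s = np.sort(rng.uniform(0.0, cum[-1], n_points))
    seg_idx = np.clip(np.searchsorted(cum, s, side="right") - 1,
                      0, len(seg_lens) - 1)
    frac = (s - cum[seg_idx]) / seg_lens[seg_idx]
    clean = road[seg_idx] + frac[:, None] * seg_vecs[seg_idx]
    noisy = clean + rng.normal(0.0, noise, clean.shape)  # GPS jitter
    return noisy, seg_idx       # model input, map-matching labels

traj, labels = synthetic_trajectory(road)
```

Pre-training on such cheap labeled pairs, then fine-tuning on the limited ground-truth trajectories, is the transfer-learning pattern the entry describes.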
arXiv Detail & Related papers (2021-08-01T11:51:11Z) - Vision Transformers are Robust Learners [65.91359312429147]
We study the robustness of the Vision Transformer (ViT) against common corruptions and perturbations, distribution shifts, and natural adversarial examples.
We present analyses that provide both quantitative and qualitative indications to explain why ViTs are indeed more robust learners.
arXiv Detail & Related papers (2021-05-17T02:39:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.