TrafficMOT: A Challenging Dataset for Multi-Object Tracking in Complex
Traffic Scenarios
- URL: http://arxiv.org/abs/2311.18839v1
- Date: Thu, 30 Nov 2023 18:59:56 GMT
- Title: TrafficMOT: A Challenging Dataset for Multi-Object Tracking in Complex
Traffic Scenarios
- Authors: Lihao Liu, Yanqi Cheng, Zhongying Deng, Shujun Wang, Dongdong Chen,
Xiaowei Hu, Pietro Liò, Carola-Bibiane Schönlieb, Angelica Aviles-Rivero
- Abstract summary: Multi-object tracking in traffic videos offers immense potential for enhancing traffic monitoring accuracy and promoting road safety measures.
Existing datasets for multi-object tracking in traffic videos often feature limited instances or focus on single classes.
We introduce TrafficMOT, an extensive dataset designed to encompass diverse traffic situations with complex scenarios.
- Score: 23.831048188389026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-object tracking in traffic videos is a crucial research area, offering
immense potential for enhancing traffic monitoring accuracy and promoting road
safety measures through the utilisation of advanced machine learning
algorithms. However, existing datasets for multi-object tracking in traffic
videos often feature limited instances or focus on single classes, which cannot
adequately simulate the challenges encountered in complex traffic scenarios. To
address this gap, we introduce TrafficMOT, an extensive dataset designed to
encompass diverse traffic situations with complex scenarios. To validate the
complexity and challenges presented by TrafficMOT, we conducted comprehensive
empirical studies using three different settings: fully-supervised,
semi-supervised, and a recent powerful zero-shot foundation model, the Tracking
Anything Model (TAM). The experimental results highlight the inherent
complexity of this dataset, emphasising its value in driving advancements in
the field of traffic monitoring and multi-object tracking.
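To give a concrete sense of how tracking performance on a dataset like TrafficMOT is typically measured in the fully-supervised and semi-supervised settings mentioned above, the sketch below scores tracker output against ground truth with standard CLEAR-MOT metrics (MOTA, IDF1, identity switches) using the open-source `motmetrics` library. The per-frame data layout and the toy inputs are illustrative assumptions, not the TrafficMOT annotation format or evaluation toolkit.

```python
# Minimal sketch: scoring a tracker's output with CLEAR-MOT metrics via the
# `motmetrics` library. The frame layout below is a hypothetical stand-in,
# since the TrafficMOT annotation format is not described in the abstract.
import numpy as np
import motmetrics as mm

def evaluate_sequence(gt_frames, hyp_frames):
    """gt_frames / hyp_frames: list of (ids, boxes) per frame; boxes are (x, y, w, h)."""
    acc = mm.MOTAccumulator(auto_id=True)
    for (gt_ids, gt_boxes), (hyp_ids, hyp_boxes) in zip(gt_frames, hyp_frames):
        # IoU-based distance matrix; pairs with IoU below 0.5 are treated as non-matches.
        dists = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
        acc.update(gt_ids, hyp_ids, dists)
    mh = mm.metrics.create()
    summary = mh.compute(acc, metrics=["mota", "idf1", "num_switches"], name="sequence")
    return mm.io.render_summary(summary, formatters=mh.formatters,
                                namemap=mm.io.motchallenge_metric_names)

# Toy usage: one frame, two ground-truth objects, two tracker hypotheses.
gt = [([1, 2], np.array([[10, 10, 50, 80], [200, 40, 60, 90]], dtype=float))]
hyp = [([7, 8], np.array([[12, 11, 50, 80], [198, 42, 60, 88]], dtype=float))]
print(evaluate_sequence(gt, hyp))
```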
Related papers
- TrafficGPT: Towards Multi-Scale Traffic Analysis and Generation with Spatial-Temporal Agent Framework [3.947797359736224]
We have designed a multi-scale traffic generation system, TrafficGPT, using three AI agents to process multi-scale traffic data.
TrafficGPT consists of three essential AI agents: 1) a text-to-demand agent to interact with users and extract prediction tasks through texts; 2) a traffic prediction agent that leverages multi-scale traffic data to generate temporal features and similarity; and 3) a suggestion and visualization agent that uses the prediction results to generate suggestions and visualizations.
arXiv Detail & Related papers (2024-05-08T07:48:40Z)
- MTLight: Efficient Multi-Task Reinforcement Learning for Traffic Signal Control [56.545522358606924]
MTLight is proposed to enhance the agent observation with a latent state, which is learned from numerous traffic indicators.
Experiments conducted on CityFlow demonstrate that MTLight has leading convergence speed and performance.
arXiv Detail & Related papers (2024-04-01T03:27:46Z)
- eTraM: Event-based Traffic Monitoring Dataset [23.978331129798356]
We present eTraM, a first-of-its-kind, fully event-based traffic monitoring dataset.
eTraM offers 10 hr of data from different traffic scenarios in various lighting and weather conditions.
It covers eight distinct classes of traffic participants, ranging from vehicles to pedestrians and micro-mobility.
arXiv Detail & Related papers (2024-03-29T04:58:56Z)
- AIDE: A Vision-Driven Multi-View, Multi-Modal, Multi-Tasking Dataset for Assistive Driving Perception [26.84439405241999]
We present an AssIstive Driving pErception dataset (AIDE) that considers context information both inside and outside the vehicle.
AIDE facilitates holistic driver monitoring through three distinctive characteristics.
Two fusion strategies are introduced to give new insights into learning effective multi-stream/modal representations.
arXiv Detail & Related papers (2023-07-26T03:12:05Z)
- Traffic Scene Parsing through the TSP6K Dataset [109.69836680564616]
We introduce a specialized traffic monitoring dataset, termed TSP6K, with high-quality pixel-level and instance-level annotations.
The dataset captures more crowded traffic scenes, with several times more traffic participants than existing driving-scene datasets.
We propose a detail refining decoder for scene parsing, which recovers the details of different semantic regions in traffic scenes.
arXiv Detail & Related papers (2023-03-06T02:05:14Z)
- DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes [74.64897845999677]
We introduce a new cross-view multi-object tracking dataset for DIVerse Open scenes with densely tracked pedestrians.
Our DIVOTrack has fifteen distinct scenarios and 953 cross-view tracks, surpassing all cross-view multi-object tracking datasets currently available.
Furthermore, we provide a novel baseline cross-view tracking method with a unified joint detection and cross-view tracking framework named CrossMOT.
arXiv Detail & Related papers (2023-02-15T14:10:42Z)
- End-to-end Tracking with a Multi-query Transformer [96.13468602635082]
Multiple-object tracking (MOT) is a challenging task that requires simultaneous reasoning about location, appearance, and identity of the objects in the scene over time.
Our aim in this paper is to move beyond tracking-by-detection approaches, to class-agnostic tracking that performs well also for unknown object classes.
arXiv Detail & Related papers (2022-10-26T10:19:37Z)
- Multi-intersection Traffic Optimisation: A Benchmark Dataset and a Strong Baseline [85.9210953301628]
Control of traffic signals is fundamental and critical to alleviate traffic congestion in urban areas.
Because of the high complexity of modelling the problem, experimental settings of current works are often inconsistent.
We propose a novel and strong baseline model based on deep reinforcement learning with the encoder-decoder structure.
arXiv Detail & Related papers (2021-01-24T03:55:39Z)
- Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving [22.693895321632507]
We propose a probabilistic, multi-modal, multi-object tracking system consisting of different trainable modules.
We show that our method outperforms current state-of-the-art on the NuScenes Tracking dataset.
arXiv Detail & Related papers (2020-12-26T15:00:54Z)
- SoDA: Multi-Object Tracking with Soft Data Association [75.39833486073597]
Multi-object tracking (MOT) is a prerequisite for a safe deployment of self-driving cars.
We propose a novel approach to MOT that uses attention to compute track embeddings that encode dependencies between observed objects.
arXiv Detail & Related papers (2020-08-18T03:40:25Z)
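The SoDA entry above describes using attention to compute track embeddings that encode dependencies between observed objects, with a soft rather than hard data association. The sketch below is a minimal, generic PyTorch illustration of that idea (self-attention over per-detection features, then a softmax-based soft association matrix); it is an assumption-laden stand-in, not the authors' implementation.

```python
# Minimal sketch (not the SoDA authors' code): self-attention over detection
# features yields context-aware track embeddings; a temperature softmax over
# cosine similarities gives a soft track-to-detection association matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrackEmbedder(nn.Module):
    def __init__(self, feat_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, det_feats: torch.Tensor) -> torch.Tensor:
        # det_feats: (batch, num_detections, feat_dim) appearance/motion features.
        attended, _ = self.attn(det_feats, det_feats, det_feats)
        return self.norm(det_feats + attended)  # residual connection + layer norm

def soft_association(track_emb: torch.Tensor, det_emb: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """Rows are tracks, columns are detections; each row is a soft assignment distribution."""
    sim = F.cosine_similarity(track_emb.unsqueeze(1), det_emb.unsqueeze(0), dim=-1)
    return F.softmax(sim / temperature, dim=1)

# Toy usage: embed 5 detections, softly associate 3 existing tracks with them.
embedder = TrackEmbedder(feat_dim=256)
detections = torch.randn(1, 5, 256)
det_embeddings = embedder(detections)[0]        # (5, 256)
track_embeddings = torch.randn(3, 256)          # carried over from previous frames
print(soft_association(track_embeddings, det_embeddings).shape)  # torch.Size([3, 5])
```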
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.