Beyond SOT: Tracking Multiple Generic Objects at Once
- URL: http://arxiv.org/abs/2212.11920v3
- Date: Sun, 25 Feb 2024 10:59:38 GMT
- Title: Beyond SOT: Tracking Multiple Generic Objects at Once
- Authors: Christoph Mayer and Martin Danelljan and Ming-Hsuan Yang and Vittorio
Ferrari and Luc Van Gool and Alina Kuznetsova
- Abstract summary: Generic Object Tracking (GOT) is the problem of tracking target objects, specified by bounding boxes in the first frame of a video.
We introduce a new large-scale GOT benchmark, LaGOT, containing multiple annotated target objects per sequence.
Our approach achieves highly competitive results on single-object GOT datasets, setting a new state of the art on TrackingNet with a success rate AUC of 84.4%.
- Score: 141.36900362724975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generic Object Tracking (GOT) is the problem of tracking target objects,
specified by bounding boxes in the first frame of a video. While the task has
received much attention in the last decades, researchers have almost
exclusively focused on the single object setting. Multi-object GOT benefits
from a wider applicability, rendering it more attractive in real-world
applications. We attribute the lack of research interest in this problem to
the absence of suitable benchmarks. In this work, we introduce a new
large-scale GOT benchmark, LaGOT, containing multiple annotated target objects
per sequence. Our benchmark allows users to tackle key remaining challenges in
GOT, aiming to increase robustness and reduce computation through joint
tracking of multiple objects simultaneously. In addition, we propose a
transformer-based GOT tracker baseline capable of joint processing of multiple
objects through shared computation. Our approach achieves a 4x faster run-time
in case of 10 concurrent objects compared to tracking each object independently
and outperforms existing single object trackers on our new benchmark. In
addition, our approach achieves highly competitive results on single-object GOT
datasets, setting a new state of the art on TrackingNet with a success rate AUC
of 84.4%. Our benchmark, code, and trained models will be made publicly
available.
Related papers
- Lost and Found: Overcoming Detector Failures in Online Multi-Object Tracking [15.533652456081374]
Multi-object tracking (MOT) endeavors to precisely estimate identities and positions of multiple objects over time.
Modern detectors may occasionally miss some objects in certain frames, causing trackers to cease tracking prematurely.
We propose BUSCA, meaning 'to search', a versatile framework compatible with any online tracking-by-detection (TbD) system.
arXiv Detail & Related papers (2024-07-14T10:45:12Z)
- Tracking Reflected Objects: A Benchmark [12.770787846444406]
We introduce TRO, a benchmark specifically for Tracking Reflected Objects.
TRO includes 200 sequences with around 70,000 frames, each carefully annotated with bounding boxes.
To provide a stronger baseline, we propose a new tracker, HiP-HaTrack, which uses hierarchical features to improve performance.
arXiv Detail & Related papers (2024-07-07T02:22:45Z)
- DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes [74.64897845999677]
We introduce a new cross-view multi-object tracking dataset for DIVerse Open scenes with densely tracked pedestrians.
Our DIVOTrack has fifteen distinct scenarios and 953 cross-view tracks, surpassing all cross-view multi-object tracking datasets currently available.
Furthermore, we provide a novel baseline cross-view tracking method with a unified joint detection and cross-view tracking framework named CrossMOT.
arXiv Detail & Related papers (2023-02-15T14:10:42Z)
- BURST: A Benchmark for Unifying Object Recognition, Segmentation and Tracking in Video [58.71785546245467]
Multiple existing benchmarks involve tracking and segmenting objects in video.
There is little interaction between them due to the use of disparate benchmark datasets and metrics.
We propose BURST, a dataset which contains thousands of diverse videos with high-quality object masks.
All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison.
arXiv Detail & Related papers (2022-09-25T01:27:35Z)
- Cannot See the Forest for the Trees: Aggregating Multiple Viewpoints to Better Classify Objects in Videos [36.28269135795851]
We present a set classifier that improves accuracy of classifying tracklets by aggregating information from multiple viewpoints contained in a tracklet.
By simply attaching our method to QDTrack on top of ResNet-101, we achieve the new state-of-the-art, 19.9% and 15.7% TrackAP_50 on TAO validation and test sets.
arXiv Detail & Related papers (2022-06-05T07:51:58Z)
- Unified Transformer Tracker for Object Tracking [58.65901124158068]
We present the Unified Transformer Tracker (UTT) to address tracking problems in different scenarios with one paradigm.
A track transformer is developed in our UTT to track the target in both Single Object Tracking (SOT) and Multiple Object Tracking (MOT).
arXiv Detail & Related papers (2022-03-29T01:38:49Z)
- MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking [72.76685780516371]
We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT).
The benchmark is focused on multiple people tracking, since pedestrians are by far the most studied object in the tracking community.
We provide a categorization of state-of-the-art trackers and a broad error analysis.
arXiv Detail & Related papers (2020-10-15T06:52:16Z)
- End-to-End Multi-Object Tracking with Global Response Map [23.755882375664875]
We present a completely end-to-end approach that takes an image sequence/video as input and directly outputs the located and tracked objects of learned types.
Specifically, with our introduced multi-object representation strategy, a global response map can be accurately generated over frames.
Experimental results on the MOT16 and MOT17 benchmarks show that our proposed online tracker achieves state-of-the-art performance on several tracking metrics.
arXiv Detail & Related papers (2020-07-13T12:30:49Z)
- TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.