Is This Tracker On? A Benchmark Protocol for Dynamic Tracking
- URL: http://arxiv.org/abs/2510.19819v1
- Date: Wed, 22 Oct 2025 17:53:56 GMT
- Title: Is This Tracker On? A Benchmark Protocol for Dynamic Tracking
- Authors: Ilona Demler, Saumya Chauhan, Georgia Gkioxari
- Abstract summary: ITTO is a new benchmark suite for evaluating and diagnosing the capabilities and limitations of point tracking methods. We conduct a rigorous analysis of state-of-the-art tracking methods on ITTO, breaking down performance along key axes of motion complexity.
- Score: 6.23176842962524
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce ITTO, a challenging new benchmark suite for evaluating and diagnosing the capabilities and limitations of point tracking methods. Our videos are sourced from existing datasets and egocentric real-world recordings, with high-quality human annotations collected through a multi-stage pipeline. ITTO captures the motion complexity, occlusion patterns, and object diversity characteristic of real-world scenes -- factors that are largely absent in current benchmarks. We conduct a rigorous analysis of state-of-the-art tracking methods on ITTO, breaking down performance along key axes of motion complexity. Our findings reveal that existing trackers struggle with these challenges, particularly in re-identifying points after occlusion, highlighting critical failure modes. These results point to the need for new modeling approaches tailored to real-world dynamics. We envision ITTO as a foundation testbed for advancing point tracking and guiding the development of more robust tracking algorithms.
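The abstract's central failure mode, losing points across occlusions, can be scored directly from annotated tracks. The sketch below shows one plausible formulation: the fraction of re-emergence events where the tracker re-localizes a point within a pixel threshold on the first frame it becomes visible again. The function name, array layout, and threshold are illustrative assumptions, not ITTO's published protocol.

```python
import numpy as np

def occlusion_reid_rate(pred, gt, vis, thresh=4.0):
    """Fraction of re-emergence events where the predicted point lands
    within `thresh` pixels of ground truth on the first frame a point
    becomes visible again after an occlusion gap.

    pred, gt: (T, N, 2) pixel coordinates; vis: (T, N) bool visibility.
    The metric definition here is illustrative, not ITTO's official one.
    """
    T, N, _ = gt.shape
    hits, total = 0, 0
    for n in range(N):
        for t in range(1, T):
            # A re-emergence event: hidden at t-1, visible again at t.
            if vis[t, n] and not vis[t - 1, n]:
                total += 1
                if np.linalg.norm(pred[t, n] - gt[t, n]) <= thresh:
                    hits += 1
    return hits / total if total else float("nan")
```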
Related papers
- CoWTracker: Tracking by Warping instead of Correlation [53.834673070954494]
We propose a dense point tracker that eschews cost volumes in favor of warping. Inspired by recent advances in optical flow, our approach iteratively refines track estimates by warping features from the target frame to the query frame based on the current estimate. Our model is simple and achieves state-of-the-art performance on standard dense point tracking benchmarks, including TAP-Vid-DAVIS, TAP-Vid-Kinetics, and RoboTAP.
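The iterative warping-based refinement described above can be pictured as: sample target-frame features at the current track estimate, compare them with the query point's features, and predict a positional correction. The sketch below illustrates that loop with PyTorch's `grid_sample`; `update_mlp` and all shapes are hypothetical stand-ins, not CoWTracker's actual architecture.

```python
import torch
import torch.nn.functional as F

def sample_features(fmap, pts):
    """Bilinearly sample a feature map (1, C, H, W) at pixel points (N, 2)."""
    _, C, H, W = fmap.shape
    grid = pts.clone()
    grid[:, 0] = 2 * pts[:, 0] / (W - 1) - 1   # normalize x to [-1, 1]
    grid[:, 1] = 2 * pts[:, 1] / (H - 1) - 1   # normalize y to [-1, 1]
    out = F.grid_sample(fmap, grid.view(1, 1, -1, 2), align_corners=True)
    return out.view(C, -1).t()                 # (N, C) sampled features

def refine_tracks(query_feat, target_fmap, init_pts, update_mlp, iters=4):
    """Iteratively warp target features to the current estimate and predict
    a residual offset (a toy stand-in for warping-based refinement;
    `update_mlp` is a hypothetical module mapping features to 2D offsets)."""
    pts = init_pts
    for _ in range(iters):
        warped = sample_features(target_fmap, pts)
        delta = update_mlp(torch.cat([query_feat, warped], dim=-1))
        pts = pts + delta                      # residual position update
    return pts
```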
arXiv Detail & Related papers (2026-02-04T18:58:59Z) - SynthVerse: A Large-Scale Diverse Synthetic Dataset for Point Tracking [61.01458607791313]
We introduce SynthVerse, a large-scale, diverse synthetic dataset specifically designed for point tracking. SynthVerse substantially expands dataset diversity by covering a broader range of object categories. We establish a highly diverse point tracking benchmark to systematically evaluate state-of-the-art methods.
arXiv Detail & Related papers (2026-02-04T11:14:21Z) - Delving into Dynamic Scene Cue-Consistency for Robust 3D Multi-Object Tracking [16.366398265001422]
3D multi-object tracking is a critical and challenging task in the field of autonomous driving. We introduce the Dynamic Scene Cue-Consistency Tracker (DSC-Track), which tracks objects by enforcing the consistency of dynamic scene cues over time.
arXiv Detail & Related papers (2025-08-15T08:48:13Z) - Head Anchor Enhanced Detection and Association for Crowded Pedestrian Tracking [8.653608112604472]
The proposed method incorporates detection features from both the regression and classification branches of an object detector. In terms of motion modeling, we propose an iterative Kalman filtering approach designed to align with modern detector assumptions.
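As a reminder of what an iterative predict/update cycle looks like in the simplest case, here is a minimal constant-velocity Kalman filter over a 2D box center. The state layout and noise magnitudes are placeholder assumptions; the paper's detector-aligned variant is necessarily more involved.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over a 2D box center;
    noise values and state layout are illustrative placeholders."""

    def __init__(self, cx, cy, dt=1.0, q=1.0, r=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])   # state: [cx, cy, vx, vy]
        self.P = np.eye(4) * 10.0               # state covariance
        self.F = np.eye(4)                      # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                   # observe position only
        self.Q = np.eye(4) * q                  # process noise
        self.R = np.eye(2) * r                  # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                       # predicted center

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```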
arXiv Detail & Related papers (2025-08-07T15:47:34Z) - What You Have is What You Track: Adaptive and Robust Multimodal Tracking [72.92244578461869]
We present the first comprehensive study on tracker performance with temporally incomplete multimodal data. Our model achieves SOTA performance across 9 benchmarks, excelling in both the conventional complete-modality setting and missing-modality settings.
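One generic pattern for coping with temporally incomplete modalities is availability-aware fusion: average only the streams actually present at each frame. The sketch below shows that idea in isolation; the function and tensor shapes are assumptions for illustration, not the paper's model.

```python
import torch

def fuse_with_missing(rgb_feat, aux_feat, aux_present):
    """Average the modalities present at each frame instead of assuming
    both always exist. rgb_feat, aux_feat: (T, D); aux_present: (T,) bool.
    A generic sketch, not the paper's fusion mechanism."""
    w = aux_present.float().unsqueeze(-1)         # (T, 1) availability mask
    return (rgb_feat + w * aux_feat) / (1.0 + w)  # mean over available streams
```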
arXiv Detail & Related papers (2025-07-08T11:40:21Z) - MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving [10.399817864597347]
This paper introduces MCTrack, a new 3D multi-object tracking method that achieves state-of-the-art (SOTA) performance across the KITTI, nuScenes, and Waymo datasets.
arXiv Detail & Related papers (2024-09-23T11:26:01Z) - LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z) - MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
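A learnable motion predictor in its simplest form maps a short history of box states to the next-frame state. The toy module below makes that shape concrete; the architecture, history length, and residual parameterization are guesses for illustration, not MotionTrack's design.

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Tiny MLP mapping the last K box states to a next-frame estimate,
    as a schematic stand-in for a learnable motion model."""

    def __init__(self, k=5, box_dim=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k * box_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, box_dim),          # predicted per-box offset
        )

    def forward(self, history):                  # history: (B, K, box_dim)
        delta = self.net(history.flatten(1))
        return history[:, -1] + delta            # residual next-frame box
```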
arXiv Detail & Related papers (2023-06-05T04:24:11Z) - End-to-end Tracking with a Multi-query Transformer [96.13468602635082]
Multiple-object tracking (MOT) is a challenging task that requires simultaneous reasoning about location, appearance, and identity of the objects in the scene over time.
Our aim in this paper is to move beyond tracking-by-detection approaches, toward class-agnostic tracking that also performs well for unknown object classes.
arXiv Detail & Related papers (2022-10-26T10:19:37Z) - DEFT: Detection Embeddings for Tracking [3.326320568999945]
We propose an efficient joint detection and tracking model named DEFT.
Our approach relies on an appearance-based object matching network jointly learned with an underlying object detection network.
DEFT has comparable accuracy and speed to the top methods on 2D online tracking leaderboards.
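The appearance-matching step that joint detection-and-embedding trackers share can be reduced to cosine similarity plus bipartite assignment. The sketch below shows that generic pattern; the similarity floor and array shapes are illustrative, not DEFT's actual matching head.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(track_embs, det_embs, sim_floor=0.3):
    """Associate detections to tracks by cosine similarity of appearance
    embeddings, solved as a bipartite assignment.

    track_embs: (M, D), det_embs: (N, D). Threshold is illustrative."""
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    sim = t @ d.T                                 # (M, N) cosine scores
    rows, cols = linear_sum_assignment(-sim)      # maximize total similarity
    return [(r, c) for r, c in zip(rows, cols) if sim[r, c] >= sim_floor]
```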
arXiv Detail & Related papers (2021-02-03T20:00:44Z) - Benchmarking Unsupervised Object Representations for Video Sequences [111.81492107649889]
We compare the perceptual abilities of four object-centric approaches: ViMON, OP3, TBA and SCALOR.
Our results suggest that the architectures with unconstrained latent representations learn more powerful representations in terms of object detection, segmentation and tracking.
Our benchmark may provide fruitful guidance towards learning more robust object-centric video representations.
arXiv Detail & Related papers (2020-06-12T09:37:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.