No Identity, no problem: Motion through detection for people tracking
- URL: http://arxiv.org/abs/2411.16466v1
- Date: Mon, 25 Nov 2024 15:13:17 GMT
- Title: No Identity, no problem: Motion through detection for people tracking
- Authors: Martin Engilberge, F. Wilke Grosche, Pascal Fua
- Abstract summary: We propose exploiting motion cues while providing supervision only for the detections.
Our algorithm predicts detection heatmaps at two different times, along with a 2D motion estimate between the two images.
We show that our approach delivers state-of-the-art results for single- and multi-view multi-target tracking on the MOT17 and WILDTRACK datasets.
- Abstract: Tracking-by-detection has become the de facto standard approach to people tracking. To increase robustness, some approaches incorporate re-identification based on appearance models and regress motion offsets, which requires costly identity annotations. In this paper, we propose exploiting motion cues while providing supervision only for the detections, which is much easier to do. Our algorithm predicts detection heatmaps at two different times, along with a 2D motion estimate between the two images. It then warps one heatmap using the motion estimate and enforces consistency with the other. This provides the required supervisory signal on the motion without the need for any motion annotations. In this manner, we couple the information obtained from different images during training and increase accuracy, especially in crowded scenes and when using low frame-rate sequences. We show that our approach delivers state-of-the-art results for single- and multi-view multi-target tracking on the MOT17 and WILDTRACK datasets.
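The warp-and-compare consistency described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the nearest-neighbour warp, the L1 loss, and the toy heatmap shapes are all assumptions (a trainable version would use a differentiable warp).

```python
import numpy as np

def warp_heatmap(heatmap, flow):
    """Warp a detection heatmap by a per-pixel 2D motion field (dx, dy)
    using nearest-neighbour sampling (a non-differentiable simplification)."""
    h, w = heatmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each target pixel samples the source pixel the motion came from.
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return heatmap[src_y, src_x]

def consistency_loss(h_t, h_t1, flow):
    """L1 discrepancy between the warped heatmap at time t and the
    heatmap at time t+1 -- the self-supervised signal on the motion."""
    return np.abs(warp_heatmap(h_t, flow) - h_t1).mean()

# Toy example: a single detection peak moving 2 pixels to the right.
h_t = np.zeros((8, 8)); h_t[4, 2] = 1.0
h_t1 = np.zeros((8, 8)); h_t1[4, 4] = 1.0
flow = np.zeros((8, 8, 2)); flow[..., 0] = 2.0  # dx = +2 everywhere
print(consistency_loss(h_t, h_t1, flow))  # → 0.0: the warp explains the motion
```

A wrong motion estimate (e.g. zero flow here) leaves the peaks misaligned and the loss positive, which is what drives the motion head during training without any motion labels.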
Related papers
- Walker: Self-supervised Multiple Object Tracking by Walking on Temporal Appearance Graphs [117.67620297750685]
We introduce Walker, the first self-supervised tracker that learns from videos with sparse bounding box annotations and no tracking labels.
Walker is the first self-supervised tracker to achieve competitive performance on MOT17, DanceTrack, and BDD100K.
arXiv Detail & Related papers (2024-09-25T18:00:00Z)
- DenseTrack: Drone-based Crowd Tracking via Density-aware Motion-appearance Synergy [33.57923199717605]
Drone-based crowd tracking faces difficulties in accurately identifying and monitoring objects from an aerial perspective.
To address these challenges, we present the Density-aware Tracking (DenseTrack) framework.
DenseTrack capitalizes on crowd counting to precisely determine object locations, blending visual and motion cues to improve the tracking of small-scale objects.
arXiv Detail & Related papers (2024-07-24T13:39:07Z)
- CORT: Class-Oriented Real-time Tracking for Embedded Systems [46.3107850275261]
This work proposes a new approach to multi-class object tracking.
It achieves smaller and more predictable execution times without penalizing tracking performance.
The proposed solution was evaluated in complex urban scenarios with several objects of different types.
arXiv Detail & Related papers (2024-07-20T09:12:17Z)
- Tracking by Associating Clips [110.08925274049409]
In this paper, we investigate an alternative by treating object association as clip-wise matching.
Our new perspective views a single long video sequence as multiple short clips, and then the tracking is performed both within and between the clips.
The benefits of this new approach are twofold. First, our method is robust to tracking-error accumulation and propagation, as the video chunking allows bypassing the interrupted frames.
Second, multi-frame information is aggregated during the clip-wise matching, resulting in more accurate long-range track association than the current frame-wise matching.
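The clip-wise matching idea can be illustrated with a toy sketch. Everything here is assumed for illustration: mean pooling as the per-clip track descriptor and a greedy cosine-similarity matcher stand in for whatever aggregation and assignment scheme the paper actually uses.

```python
import numpy as np

def clip_embedding(frame_features):
    """Aggregate the per-frame features of one track within a clip
    into a single descriptor (mean pooling -- an illustrative choice)."""
    return np.mean(frame_features, axis=0)

def match_clips(tracks_a, tracks_b):
    """Greedily associate tracks of clip A to tracks of clip B by
    cosine similarity of their aggregated descriptors."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
    pairs, used = [], set()
    for i, ta in enumerate(tracks_a):
        best, best_s = None, -1.0
        for j, tb in enumerate(tracks_b):
            if j not in used and cos(ta, tb) > best_s:
                best, best_s = j, cos(ta, tb)
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs

# Two tracks per clip; track 0 of clip A should match track 1 of clip B.
a0 = clip_embedding(np.array([[1.0, 0.1], [0.9, 0.0]]))
a1 = clip_embedding(np.array([[0.1, 1.0], [0.0, 0.9]]))
b0 = clip_embedding(np.array([[0.0, 1.0]]))
b1 = clip_embedding(np.array([[1.0, 0.0]]))
print(match_clips([a0, a1], [b0, b1]))  # → [(0, 1), (1, 0)]
```

Because each descriptor pools several frames, a single occluded or interrupted frame inside a clip barely perturbs the match, which is the robustness the blurb refers to.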
arXiv Detail & Related papers (2022-12-20T10:33:17Z)
- Multi-view Tracking Using Weakly Supervised Human Motion Prediction [60.972708589814125]
We argue that an even more effective approach is to predict people's motion over time and to infer their presence in individual frames from these predictions.
This makes it possible to enforce consistency both over time and across views of a single temporal frame.
We validate our approach on the PETS2009 and WILDTRACK datasets and demonstrate that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-10-19T17:58:23Z)
- SegTAD: Precise Temporal Action Detection via Semantic Segmentation [65.01826091117746]
We formulate the task of temporal action detection (TAD) from the novel perspective of semantic segmentation.
Owing to the 1-dimensional property of TAD, we are able to convert the coarse-grained detection annotations to fine-grained semantic segmentation annotations for free.
We propose an end-to-end framework, SegTAD, composed of a 1D semantic segmentation network (1D-SSN) and a proposal detection network (PDN).
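The "for free" label conversion can be made concrete. In this hypothetical sketch, each annotated action interval (start, end, class) paints its class id over the corresponding time steps, turning coarse detection annotations into a dense 1D segmentation target with 0 as background; the actual SegTAD labeling scheme may differ.

```python
def intervals_to_labels(intervals, num_steps):
    """Convert coarse interval annotations [(start, end, class_id), ...]
    into a per-time-step label sequence; 0 denotes background."""
    labels = [0] * num_steps
    for start, end, class_id in intervals:
        for t in range(start, end):  # end is exclusive
            labels[t] = class_id
    return labels

# Two actions in a 10-step sequence.
print(intervals_to_labels([(2, 5, 1), (7, 9, 2)], 10))
# → [0, 0, 1, 1, 1, 0, 0, 2, 2, 0]
```

This one-dimensional structure is what makes the conversion free: in 2D images, turning boxes into pixel-accurate masks would require extra annotation, but on a timeline an interval already is a segment.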
arXiv Detail & Related papers (2022-03-03T06:52:13Z)
- DEFT: Detection Embeddings for Tracking [3.326320568999945]
We propose an efficient joint detection and tracking model named DEFT.
Our approach relies on an appearance-based object matching network learned jointly with an underlying object detection network.
DEFT has comparable accuracy and speed to the top methods on 2D online tracking leaderboards.
arXiv Detail & Related papers (2021-02-03T20:00:44Z)
- Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets [96.98888948518815]
State-of-the-art multi-object tracking (MOT) methods follow the tracking-by-detection paradigm.
We propose a new MOT paradigm, tracking-by-counting, tailored for crowded scenes.
arXiv Detail & Related papers (2020-07-18T19:51:53Z)
- Joint Detection and Tracking in Videos with Identification Features [36.55599286568541]
We propose the first joint optimization of detection, tracking and re-identification features for videos.
Our method reaches the state of the art on MOT: it ranks 1st among online trackers in the UA-DETRAC'18 tracking challenge and 3rd overall.
arXiv Detail & Related papers (2020-05-21T21:06:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.