DenseTrack: Drone-based Crowd Tracking via Density-aware Motion-appearance Synergy
- URL: http://arxiv.org/abs/2407.17272v2
- Date: Fri, 26 Jul 2024 07:40:47 GMT
- Title: DenseTrack: Drone-based Crowd Tracking via Density-aware Motion-appearance Synergy
- Authors: Yi Lei, Huilin Zhu, Jingling Yuan, Guangli Xiang, Xian Zhong, Shengfeng He
- Abstract summary: Drone-based crowd tracking faces difficulties in accurately identifying and monitoring objects from an aerial perspective.
To address these challenges, we present the Density-aware Tracking (DenseTrack) framework.
DenseTrack capitalizes on crowd counting to precisely determine object locations, blending visual and motion cues to improve the tracking of small-scale objects.
- Score: 33.57923199717605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Drone-based crowd tracking faces difficulties in accurately identifying and monitoring objects from an aerial perspective, largely due to their small size and close proximity to each other, which complicates both localization and tracking. To address these challenges, we present the Density-aware Tracking (DenseTrack) framework. DenseTrack capitalizes on crowd counting to precisely determine object locations, blending visual and motion cues to improve the tracking of small-scale objects. It specifically addresses the problem of cross-frame motion to enhance tracking accuracy and dependability. DenseTrack employs crowd density estimates as anchors for exact object localization within video frames. These estimates are merged with motion and position information from the tracking network, with motion offsets serving as key tracking cues. Moreover, DenseTrack enhances the ability to distinguish small-scale objects using insights from a visual-language model, integrating appearance with motion cues. The framework utilizes the Hungarian algorithm to ensure the accurate matching of individuals across frames. Demonstrated on the DroneCrowd dataset, our approach exhibits superior performance, confirming its effectiveness in drone-captured scenarios.
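To make the association step described in the abstract concrete, the following Python sketch (an illustration under assumed forms, not the authors' released code) matches density-peak locations across frames with the Hungarian algorithm, blending a motion-offset distance with a cosine appearance cost; the weighting, normalization, and gating threshold are placeholder choices.

```python
# Illustrative sketch: matching density-derived points across frames with the
# Hungarian algorithm, using a cost that blends a motion-offset term with an
# appearance-similarity term. Weights and distance forms are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_tracks_to_detections(track_pts, track_offsets, track_feats,
                               det_pts, det_feats, w_motion=0.5, max_cost=1.0):
    """Associate previous-frame tracks with current-frame density peaks.

    track_pts:     (N, 2) previous locations from crowd-density peaks
    track_offsets: (N, 2) predicted cross-frame motion offsets
    track_feats:   (N, D) L2-normalized appearance embeddings
    det_pts:       (M, 2) current-frame locations from crowd-density peaks
    det_feats:     (M, D) L2-normalized appearance embeddings
    """
    # Motion cost: distance between motion-compensated track points and
    # detections, softly normalized to roughly [0, 1] (assumed form).
    pred_pts = track_pts + track_offsets
    dists = np.linalg.norm(pred_pts[:, None, :] - det_pts[None, :, :], axis=-1)
    motion_cost = dists / (dists.max() + 1e-6)

    # Appearance cost: 1 - cosine similarity between embeddings.
    app_cost = 1.0 - track_feats @ det_feats.T

    cost = w_motion * motion_cost + (1.0 - w_motion) * app_cost

    # Hungarian algorithm yields a minimum-cost one-to-one assignment.
    rows, cols = linear_sum_assignment(cost)

    # Keep only sufficiently cheap pairs; the rest become lost or new tracks.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]


# Toy usage: 3 tracks, 3 detections.
rng = np.random.default_rng(0)
tp = rng.uniform(0, 100, (3, 2)); off = rng.normal(0, 1, (3, 2))
tf = rng.normal(size=(3, 8)); tf /= np.linalg.norm(tf, axis=1, keepdims=True)
dp = tp + off + rng.normal(0, 0.5, (3, 2))
df = tf + 0.05 * rng.normal(size=(3, 8)); df /= np.linalg.norm(df, axis=1, keepdims=True)
print(match_tracks_to_detections(tp, off, tf, dp, df))
```

In the framework described above, the appearance embeddings would come from the visual-language model and the offsets from the tracking network; the sketch only stands in for the final matching stage.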
Related papers
- No Identity, no problem: Motion through detection for people tracking [48.708733485434394]
We propose exploiting motion clues while providing supervision only for the detections.
Our algorithm predicts detection heatmaps at two different times, along with a 2D motion estimate between the two images.
We show that our approach delivers state-of-the-art results for single- and multi-view multi-target tracking on the MOT17 and WILDTRACK datasets.
arXiv Detail & Related papers (2024-11-25T15:13:17Z) - SCTracker: Multi-object tracking with shape and confidence constraints [11.210661553388615]
This paper proposes SCTracker, a multi-object tracker based on shape and confidence constraints.
An Intersection over Union (IoU) distance with shape constraints is applied to calculate the cost matrix between tracks and detections.
A Kalman filter conditioned on detection confidence is used to update the motion state, improving tracking performance when detections have low confidence (see the illustrative sketch after this list).
arXiv Detail & Related papers (2023-05-16T15:18:42Z) - CXTrack: Improving 3D Point Cloud Tracking with Contextual Information [59.55870742072618]
3D single object tracking plays an essential role in many applications, such as autonomous driving.
We propose CXTrack, a novel transformer-based network for 3D object tracking.
We show that CXTrack achieves state-of-the-art tracking performance while running at 29 FPS.
arXiv Detail & Related papers (2022-11-12T11:29:01Z) - Track without Appearance: Learn Box and Tracklet Embedding with Local and Global Motion Patterns for Vehicle Tracking [45.524183249765244]
Vehicle tracking is an essential task in the multi-object tracking (MOT) field.
In this paper, we try to explore the significance of motion patterns for vehicle tracking without appearance information.
We propose a novel approach that tackles the association issue in long-term tracking by relying exclusively on fully exploited motion information.
arXiv Detail & Related papers (2021-08-13T02:27:09Z) - Tracking by Joint Local and Global Search: A Target-aware Attention based Approach [63.50045332644818]
We propose a novel target-aware attention mechanism (termed TANet) to conduct joint local and global search for robust tracking.
Specifically, we extract the features of the target object patch and continuous video frames, then feed them into a decoder network to generate target-aware global attention maps.
In the tracking procedure, we integrate the target-aware attention with multiple trackers by exploring candidate search regions for robust tracking.
arXiv Detail & Related papers (2021-06-09T06:54:15Z) - Track to Detect and Segment: An Online Multi-Object Tracker [81.15608245513208]
TraDeS is an online joint detection and tracking model, exploiting tracking clues to assist detection end-to-end.
TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features.
arXiv Detail & Related papers (2021-03-16T02:34:06Z) - Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z) - Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets [96.98888948518815]
State-of-the-art multi-object tracking (MOT) methods follow the tracking-by-detection paradigm.
We propose a new MOT paradigm, tracking-by-counting, tailored for crowded scenes.
arXiv Detail & Related papers (2020-07-18T19:51:53Z) - DroTrack: High-speed Drone-based Object Tracking Under Uncertainty [0.23204178451683263]
DroTrack is a high-speed visual single-object tracking framework for drone-captured video sequences.
We implement an effective object segmentation based on Fuzzy C Means.
We also leverage the geometrical angular motion to estimate a reliable object scale.
arXiv Detail & Related papers (2020-05-02T13:16:16Z)
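The SCTracker entry above mentions an IoU cost with shape constraints and a confidence-dependent Kalman update. The sketch below illustrates one plausible form of those two ingredients; the penalty form, weight, and gain rule are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the ideas summarized for SCTracker above: an IoU distance
# with an added shape-consistency penalty, and a state update whose gain is
# scaled by detection confidence. Forms and weights are assumptions.
import numpy as np


def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def shape_constrained_cost(track_box, det_box, w_shape=0.3):
    """1 - IoU plus a penalty for width/height mismatch (illustrative form)."""
    tw, th = track_box[2] - track_box[0], track_box[3] - track_box[1]
    dw, dh = det_box[2] - det_box[0], det_box[3] - det_box[1]
    shape_pen = abs(tw - dw) / max(tw, dw) + abs(th - dh) / max(th, dh)
    return (1.0 - iou(track_box, det_box)) + w_shape * shape_pen


def confidence_weighted_update(state, measurement, confidence):
    """Blend prediction and measurement; low-confidence detections move the
    state less, mimicking a confidence-dependent Kalman gain (assumed rule)."""
    gain = confidence
    return state + gain * (measurement - state)


track = np.array([10, 10, 30, 50], dtype=float)
det = np.array([12, 11, 31, 52], dtype=float)
print(shape_constrained_cost(track, det))
print(confidence_weighted_update(track, det, confidence=0.4))
```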