Rank-based verification for long-term face tracking in crowded scenes
- URL: http://arxiv.org/abs/2107.13273v1
- Date: Wed, 28 Jul 2021 11:15:04 GMT
- Title: Rank-based verification for long-term face tracking in crowded scenes
- Authors: Germán Barquero, Isabelle Hupont and Carles Fernández
- Abstract summary: We present a long-term, multi-face tracking architecture conceived for working in crowded contexts.
Our system benefits from advances in the fields of face detection and face recognition to achieve long-term tracking.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most current multi-object trackers focus on short-term tracking, and are
based on deep and complex systems that often cannot operate in real-time,
making them impractical for video-surveillance. In this paper we present a
long-term, multi-face tracking architecture conceived for working in crowded
contexts where faces are often the only visible part of a person. Our system
benefits from advances in the fields of face detection and face recognition to
achieve long-term tracking, and is largely unconstrained by the motion and
occlusions of people. It follows a tracking-by-detection approach, combining a
fast short-term visual tracker with a novel online tracklet reconnection
strategy grounded on rank-based face verification. The proposed rank-based
constraint favours higher inter-class distance among tracklets, and reduces the
propagation of errors due to wrong reconnections. Additionally, a correction
module is included to correct past assignments with no extra computational
cost. We present a series of experiments introducing novel specialized metrics
for the evaluation of long-term tracking capabilities, and publicly release a
video dataset with 10 manually annotated videos and a total length of 8' 54".
Our findings validate the robustness of each of the proposed modules, and
demonstrate that, in these challenging contexts, our approach yields up to 50%
longer tracks than state-of-the-art deep learning trackers.
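
The abstract describes the tracklet-reconnection step only at a high level. The sketch below shows one plausible way a rank-based reconnection check could be implemented in Python; the function and parameter names (`rank_based_reconnect`, `sim_threshold`, `margin`), the mean-embedding aggregation, and the rank-1-with-margin criterion are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def tracklet_embedding(face_embeddings):
    """Aggregate the per-frame face embeddings of a tracklet into one L2-normalised vector."""
    mean = np.mean(face_embeddings, axis=0)
    return mean / (np.linalg.norm(mean) + 1e-12)

def rank_based_reconnect(new_tracklet, past_tracklets, sim_threshold=0.6, margin=0.05):
    """Return (index, similarity) of the past tracklet to reconnect to, or (None, best_sim).

    Hypothetical criterion: the best-matching past tracklet must exceed an
    absolute similarity threshold AND beat the runner-up by a margin, i.e.
    hold rank 1 with a clear gap. This favours large inter-class distance
    and avoids reconnections that could propagate identity errors.
    """
    query = tracklet_embedding(new_tracklet)
    gallery = np.stack([tracklet_embedding(t) for t in past_tracklets])  # (N, D)
    sims = gallery @ query  # cosine similarities (embeddings are L2-normalised)

    order = np.argsort(-sims)  # candidate indices, most similar first
    best = order[0]
    runner_up_sim = sims[order[1]] if len(order) > 1 else -1.0

    confident = sims[best] >= sim_threshold and sims[best] - runner_up_sim >= margin
    return (int(best), float(sims[best])) if confident else (None, float(sims[best]))

# Toy usage: three past tracklets of 128-D embeddings; the new tracklet is a
# noisy copy of identity 1, so it should reconnect to index 1.
rng = np.random.default_rng(0)
past = [rng.normal(size=(20, 128)) for _ in range(3)]
new = past[1] + 0.05 * rng.normal(size=(20, 128))
print(rank_based_reconnect(new, past))  # e.g. (1, 0.99...)
```

Requiring the best candidate to win by a margin, rather than merely pass a distance threshold, is one way to discourage reconnections between similar-looking identities, which is the error-propagation concern the abstract raises.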
Related papers
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z)
- MotionTrack: Learning Robust Short-term and Long-term Motions for Multi-Object Tracking [56.92165669843006]
We propose MotionTrack, which learns robust short-term and long-term motions in a unified framework to associate trajectories from a short to long range.
For dense crowds, we design a novel Interaction Module to learn interaction-aware motions from short-term trajectories, which can estimate the complex movement of each target.
For extreme occlusions, we build a novel Refind Module to learn reliable long-term motions from the target's history trajectory, which can link the interrupted trajectory with its corresponding detection.
arXiv Detail & Related papers (2023-03-18T12:38:33Z)
- Multi-view Tracking Using Weakly Supervised Human Motion Prediction [60.972708589814125]
We argue that an even more effective approach is to predict people's motion over time and infer their presence in individual frames from these predictions.
This makes it possible to enforce consistency both over time and across views of a single temporal frame.
We validate our approach on the PETS2009 and WILDTRACK datasets and demonstrate that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-10-19T17:58:23Z)
- CoCoLoT: Combining Complementary Trackers in Long-Term Visual Tracking [17.2557973738397]
We propose a framework, named CoCoLoT, that combines the characteristics of complementary visual trackers to achieve enhanced long-term tracking performance.
CoCoLoT perceives whether the trackers are following the target object through an online learned deep verification model, and accordingly activates a decision policy.
The proposed methodology is evaluated extensively and the comparison with several other solutions reveals that it competes favourably with the state of the art on the most popular long-term visual tracking benchmarks.
arXiv Detail & Related papers (2022-05-09T13:25:13Z)
- Unsupervised Learning of Accurate Siamese Tracking [68.58171095173056]
We present a novel unsupervised tracking framework, in which we can learn temporal correspondence both on the classification branch and regression branch.
Our tracker outperforms preceding unsupervised methods by a substantial margin, performing on par with supervised methods on large-scale datasets such as TrackingNet and LaSOT.
arXiv Detail & Related papers (2022-04-04T13:39:43Z)
- Continuity-Discrimination Convolutional Neural Network for Visual Object Tracking [150.51667609413312]
This paper proposes a novel model, the Continuity-Discrimination Convolutional Neural Network (CD-CNN), for visual object tracking.
CD-CNN models temporal appearance continuity based on the idea of temporal slowness.
In order to alleviate inaccurate target localization and drifting, we propose a novel notion, object-centroid.
arXiv Detail & Related papers (2021-04-18T06:35:03Z)
- Learning to Track with Object Permanence [61.36492084090744]
We introduce an end-to-end trainable approach for joint object detection and tracking.
Our model, trained jointly on synthetic and real data, outperforms the state of the art on the KITTI and MOT17 datasets.
arXiv Detail & Related papers (2021-03-26T04:43:04Z)
- DEFT: Detection Embeddings for Tracking [3.326320568999945]
We propose an efficient joint detection and tracking model named DEFT.
Our approach relies on an appearance-based object matching network jointly learned with an underlying object detection network.
DEFT has comparable accuracy and speed to the top methods on 2D online tracking leaderboards.
arXiv Detail & Related papers (2021-02-03T20:00:44Z)
- Long-Term Face Tracking for Crowded Video-Surveillance Scenarios [0.0]
We present a long-term multi-face tracking architecture conceived for working in crowded contexts.
Our system benefits from advances in the fields of face detection and face recognition to achieve long-term tracking.
arXiv Detail & Related papers (2020-10-17T00:11:13Z)
- Unsupervised Multiple Person Tracking using AutoEncoder-Based Lifted Multicuts [11.72025865314187]
We present an unsupervised multiple object tracking approach based on minimum visual features and lifted multicuts.
We show that, despite being trained without using the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
arXiv Detail & Related papers (2020-02-04T09:42:34Z)