Coarse-to-Fine Object Tracking Using Deep Features and Correlation
Filters
- URL: http://arxiv.org/abs/2012.12784v1
- Date: Wed, 23 Dec 2020 16:43:21 GMT
- Title: Coarse-to-Fine Object Tracking Using Deep Features and Correlation
Filters
- Authors: Ahmed Zgaren, Wassim Bouachir, Riadh Ksantini
- Abstract summary: This paper presents a novel deep learning tracking algorithm.
We exploit the generalization ability of deep features to coarsely estimate target translation.
Then, we capitalize on the discriminative power of correlation filters to precisely localize the tracked object.
- Score: 2.3526458707956643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep learning trackers have achieved promising results
while bringing interesting ideas to solve the tracking problem. This progress
is mainly due to the use of learned deep features obtained by training deep
convolutional neural networks (CNNs) on large image databases. But since CNNs
were originally developed for image classification, the appearance modeling
provided by their deep layers may not be discriminative enough for the
tracking task. In fact, such features represent high-level information that is
more related to the object category than to a specific instance of the object.
Motivated by this observation, and by the fact that discriminative correlation
filters (DCFs) may provide complementary low-level information, we present a
novel tracking algorithm taking advantage of both approaches. We formulate the
tracking task as a two-stage procedure. First, we exploit the generalization
ability of deep features to coarsely estimate target translation, while
ensuring invariance to appearance change. Then, we capitalize on the
discriminative power of correlation filters to precisely localize the tracked
object. Furthermore, we designed an update control mechanism to learn
appearance change while avoiding model drift. We evaluated the proposed tracker
on object tracking benchmarks. Experimental results show the robustness of our
algorithm, which performs favorably against CNN- and DCF-based trackers. Code is
available at: https://github.com/AhmedZgaren/Coarse-to-fine-Tracker
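The two-stage procedure described in the abstract can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: a correlation pass on stride-subsampled arrays stands in for the low-resolution deep-feature stage, and a full-resolution correlation inside the coarse window plays the role of the DCF refinement. The function names and the `stride` parameter are assumptions made for illustration.

```python
import numpy as np

def xcorr_peak(search, template):
    """Return the (y, x) top-left corner maximizing the sliding-window
    cross-correlation of `template` over `search` (spatial domain)."""
    th, tw = template.shape
    H, W = search.shape
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            score = float(np.sum(search[y:y + th, x:x + tw] * template))
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

def coarse_to_fine_track(frame, template, stride=4):
    """Two-stage localization: a coarse pass on subsampled arrays mimics
    the low-resolution deep-feature stage, then a fine correlation pass
    refines the estimate inside the coarse region."""
    # Coarse stage: stride-subsample frame and template, find the peak,
    # then map the coarse location back to full resolution.
    cy, cx = xcorr_peak(frame[::stride, ::stride], template[::stride, ::stride])
    cy, cx = cy * stride, cx * stride

    # Fine stage: correlate the full-resolution template within a small
    # window around the coarse estimate (the DCF role in the paper).
    th, tw = template.shape
    pad = stride  # search margin around the coarse estimate
    y0, x0 = max(cy - pad, 0), max(cx - pad, 0)
    y1 = min(cy + th + pad, frame.shape[0])
    x1 = min(cx + tw + pad, frame.shape[1])
    fy, fx = xcorr_peak(frame[y0:y1, x0:x1], template)
    return y0 + fy, x0 + fx
```

The design point the paper argues for is visible even in this toy version: the coarse pass tolerates resolution loss (invariance to appearance detail) while the fine pass recovers precise localization, and restricting the fine search to the coarse window keeps the refinement cheap.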
Related papers
- Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR
based 3D Object Detection [50.959453059206446]
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector in a track-centric perspective instead of the conventional object-centric perspective.
arXiv Detail & Related papers (2023-04-24T17:59:05Z)
- A Bayesian Detect to Track System for Robust Visual Object Tracking and
Semi-Supervised Model Learning [1.7268829007643391]
We address problems in a Bayesian tracking and detection framework parameterized by neural network outputs.
We propose a particle filter-based approximate sampling algorithm for tracking object state estimation.
Based on our particle filter inference algorithm, a semi-supervised learning algorithm is used to train the tracking network on intermittently labeled frames.
arXiv Detail & Related papers (2022-05-05T00:18:57Z)
- Correlation-Aware Deep Tracking [83.51092789908677]
We propose a novel target-dependent feature network inspired by the self-/cross-attention scheme.
Our network deeply embeds cross-image feature correlation in multiple layers of the feature network.
Our model can be flexibly pre-trained on abundant unpaired images, leading to notably faster convergence than the existing methods.
arXiv Detail & Related papers (2022-03-03T11:53:54Z)
- Learning Dynamic Compact Memory Embedding for Deformable Visual Object
Tracking [82.34356879078955]
We propose a compact memory embedding to enhance the discrimination of the segmentation-based deformable visual tracking method.
Our method outperforms leading segmentation-based trackers, namely D3S and SiamMask, on the DAVIS 2017 benchmark.
arXiv Detail & Related papers (2021-11-23T03:07:12Z)
- Video Annotation for Visual Tracking via Selection and Refinement [74.08109740917122]
We present a new framework to facilitate bounding box annotations for video sequences.
A temporal assessment network is proposed which is able to capture the temporal coherence of target locations.
A visual-geometry refinement network is also designed to further enhance the selected tracking results.
arXiv Detail & Related papers (2021-08-09T05:56:47Z)
- Deep Feature Tracker: A Novel Application for Deep Convolutional Neural
Networks [0.0]
We propose a novel and unified deep learning-based approach that can learn how to track features reliably.
The proposed network, dubbed Deep-PT, consists of a tracker network built on convolutional neural network cross-correlation.
The network is trained using multiple datasets due to the lack of a specialized dataset for feature tracking.
arXiv Detail & Related papers (2021-07-30T23:24:29Z)
- Occlusion Aware Kernel Correlation Filter Tracker using RGB-D [0.0]
This thesis first details the working prototype of the Kernelized Correlation Filter tracker.
We investigate its effectiveness in real-time applications and supporting visualizations.
We also study the use of particle filters to improve trackers' accuracy.
arXiv Detail & Related papers (2021-05-25T18:37:39Z)
- Multiple Convolutional Features in Siamese Networks for Object Tracking [13.850110645060116]
Multiple Features-Siamese Tracker (MFST) is a novel tracking algorithm exploiting several hierarchical feature maps for robust tracking.
MFST achieves high tracking accuracy, while outperforming the standard siamese tracker on object tracking benchmarks.
arXiv Detail & Related papers (2021-03-01T08:02:27Z)
- Unsupervised Deep Representation Learning for Real-Time Tracking [137.69689503237893]
We propose an unsupervised learning method for visual tracking.
The motivation of our unsupervised learning is that a robust tracker should be effective in bidirectional tracking.
We build our framework on a Siamese correlation filter network, and propose a multi-frame validation scheme and a cost-sensitive loss to facilitate unsupervised learning.
arXiv Detail & Related papers (2020-07-22T08:23:12Z)
- Robust Visual Object Tracking with Two-Stream Residual Convolutional
Networks [62.836429958476735]
We propose a Two-Stream Residual Convolutional Network (TS-RCN) for visual tracking.
Our TS-RCN can be integrated with existing deep learning based visual trackers.
To further improve the tracking performance, we adopt a "wider" residual network ResNeXt as its feature extraction backbone.
arXiv Detail & Related papers (2020-05-13T19:05:42Z)
- Rethinking Convolutional Features in Correlation Filter Based Tracking [0.0]
We revisit a hierarchical deep feature-based visual tracker and find that both the performance and efficiency of the deep tracker are limited by the poor feature quality.
After removing redundant features, our proposed tracker achieves significant improvements in both performance and efficiency.
arXiv Detail & Related papers (2019-12-30T04:39:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.