Rethinking Convolutional Features in Correlation Filter Based Tracking
- URL: http://arxiv.org/abs/1912.12811v1
- Date: Mon, 30 Dec 2019 04:39:38 GMT
- Title: Rethinking Convolutional Features in Correlation Filter Based Tracking
- Authors: Fang Liang, Wenjun Peng, Qinghao Liu, Haijin Wang
- Abstract summary: We revisit a hierarchical deep feature-based visual tracker and find that both the performance and efficiency of the deep tracker are limited by poor feature quality.
After removing redundant features, our proposed tracker achieves significant improvements in both performance and efficiency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Both accuracy and efficiency are of significant importance to the task of visual object tracking. In recent years, with the surge of deep learning, Deep Convolutional Neural Networks (DCNNs) have become a very popular choice in the tracking community. However, due to their high computational complexity, end-to-end visual object trackers can hardly achieve acceptable inference times and are therefore difficult to deploy in many real-world applications. In this paper, we revisit a hierarchical deep feature-based visual tracker and find that both the performance and efficiency of the deep tracker are limited by poor feature quality. We therefore propose a feature selection module that selects more discriminative features for the tracker. After removing redundant features, our proposed tracker achieves significant improvements in both performance and efficiency. Finally, comparisons with state-of-the-art trackers are provided.
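The abstract leaves the selection criterion unspecified, so the following is only a minimal sketch of the general recipe: score the channels of a hierarchical CNN feature map, keep the most discriminative ones, and train a standard multi-channel correlation filter on the reduced stack. The energy-based scoring rule and every function name below are illustrative assumptions, not the authors' actual module.

```python
import numpy as np

def select_channels(feat, k):
    """Keep the k channels with the highest response energy.
    feat: (C, H, W) CNN feature map. Energy is an illustrative
    stand-in for the paper's (unspecified) selection criterion."""
    energy = (feat ** 2).sum(axis=(1, 2))
    return feat[np.argsort(energy)[-k:]]

def train_filter(feat, label, lam=1e-2):
    """Closed-form multi-channel correlation filter (MOSSE-style).
    feat: (C, H, W) selected features; label: (H, W) desired
    Gaussian response centred on the target."""
    F = np.fft.fft2(feat, axes=(1, 2))
    G = np.fft.fft2(label)
    return (G[None] * np.conj(F)) / ((F * np.conj(F)).sum(axis=0) + lam)[None]

def locate(filt, feat):
    """Correlate the filter with new-frame features; the response
    peak gives the target translation."""
    F = np.fft.fft2(feat, axes=(1, 2))
    resp = np.fft.ifft2((filt * F).sum(axis=0)).real
    return np.unravel_index(resp.argmax(), resp.shape)
```

In an actual tracker the selection would run once per sequence (or per level of the feature hierarchy) and the filter would be updated online with a running average.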
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z)
- FeatureSORT: Essential Features for Effective Tracking [0.0]
We introduce a novel tracker designed for online multiple object tracking, with a focus on simplicity while remaining effective.
By integrating distinct appearance features, including clothing color, style, and target direction, our tracker significantly enhances online tracking accuracy.
arXiv Detail & Related papers (2024-07-05T04:37:39Z)
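The FeatureSORT entry above fuses several distinct appearance cues for data association; one plausible, purely illustrative way to combine such cues is a weighted sum of per-cue cosine distances. The cue names and weights below are hypothetical.

```python
import numpy as np

def cosine_dist(a, b):
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def association_cost(track, detection, weights):
    """Fuse per-cue distances into one association cost; the fused
    costs would feed a standard assignment step (e.g. Hungarian)."""
    return sum(w * cosine_dist(track[cue], detection[cue])
               for cue, w in weights.items())

# Hypothetical cues and weights, mirroring the cues named above.
weights = {"colour": 0.4, "style": 0.3, "direction": 0.3}
```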
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, making better use of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, and their high latency poses challenges for practical deployment in real driving scenarios.
In this paper, we explore the use of neural architecture search (NAS) methods to find efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
- CoCoLoT: Combining Complementary Trackers in Long-Term Visual Tracking [17.2557973738397]
We propose a framework, named CoCoLoT, that combines the characteristics of complementary visual trackers to achieve enhanced long-term tracking performance.
CoCoLoT perceives whether the trackers are following the target object through an online learned deep verification model, and accordingly activates a decision policy.
The proposed methodology is evaluated extensively and the comparison with several other solutions reveals that it competes favourably with the state-of-the-art on the most popular long-term visual tracking benchmarks.
arXiv Detail & Related papers (2022-05-09T13:25:13Z)
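A rough sketch of the switching scheme described in the CoCoLoT entry above (not the authors' actual policy): score each tracker's output with a verification model and keep the best-scoring box. Both interfaces below are assumptions for illustration.

```python
def combine_trackers(frame, trackers, verifier):
    """Run complementary trackers and keep the candidate box that an
    online-learned verification model scores highest.
    trackers: objects with .update(frame) -> (x, y, w, h)
    verifier: object with .score(frame, box) -> float
    (both interfaces assumed for illustration)."""
    candidates = [t.update(frame) for t in trackers]
    scores = [verifier.score(frame, box) for box in candidates]
    best = max(range(len(candidates)), key=scores.__getitem__)
    # The decision policy could also re-initialise low-scoring
    # trackers on the winning box to recover from drift.
    return candidates[best]
```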
- Correlation-Aware Deep Tracking [83.51092789908677]
We propose a novel target-dependent feature network inspired by the self-/cross-attention scheme.
Our network deeply embeds cross-image feature correlation in multiple layers of the feature network.
Our model can be flexibly pre-trained on abundant unpaired images, leading to notably faster convergence than existing methods.
arXiv Detail & Related papers (2022-03-03T11:53:54Z)
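The "deeply embedded" cross-image correlation in the entry above suggests attention layers inside the backbone; below is a minimal PyTorch sketch of one such cross-attention block, an assumption about the design rather than the paper's exact architecture.

```python
import torch.nn as nn

class CrossAttention(nn.Module):
    """Search-region tokens attend to template tokens, injecting
    cross-image correlation into the features (illustrative only)."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, search, template):
        # search: (B, Ns, C) tokens; template: (B, Nt, C) tokens
        out, _ = self.attn(query=search, key=template, value=template)
        return self.norm(search + out)  # residual connection + norm

# Stacking such blocks throughout the backbone, rather than
# correlating features only once at the end, is the gist of
# embedding correlation in multiple layers.
```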
- Coarse-to-Fine Object Tracking Using Deep Features and Correlation Filters [2.3526458707956643]
This paper presents a novel deep learning tracking algorithm.
We exploit the generalization ability of deep features to coarsely estimate target translation.
Then, we capitalize on the discriminative power of correlation filters to precisely localize the tracked object.
arXiv Detail & Related papers (2020-12-23T16:43:21Z)
- Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Trackers based on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful deep learning alternative.
DenseLSTMs outperform residual and regular LSTMs, and offer higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z)
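For the residual LSTMs mentioned in the entry above, a minimal sketch (one plausible reading, not the paper's exact architecture) adds an identity skip around each LSTM layer; a dense variant would instead feed each layer the concatenation of all earlier outputs.

```python
import torch.nn as nn

class ResidualLSTM(nn.Module):
    """Stacked LSTM layers with identity skip connections, one
    plausible reading of a 'residual LSTM' (illustrative only)."""

    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(dim, dim, batch_first=True) for _ in range(num_layers))

    def forward(self, x):
        # x: (B, T, dim) sequence of per-frame appearance features
        for lstm in self.layers:
            out, _ = lstm(x)
            x = x + out  # skip connection eases gradient flow
        return x
```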
- Robust Visual Object Tracking with Two-Stream Residual Convolutional Networks [62.836429958476735]
We propose a Two-Stream Residual Convolutional Network (TS-RCN) for visual tracking.
Our TS-RCN can be integrated with existing deep learning based visual trackers.
To further improve the tracking performance, we adopt a "wider" residual network, ResNeXt, as the feature extraction backbone.
arXiv Detail & Related papers (2020-05-13T19:05:42Z)