LightTrack: Finding Lightweight Neural Networks for Object Tracking via
One-Shot Architecture Search
- URL: http://arxiv.org/abs/2104.14545v1
- Date: Thu, 29 Apr 2021 17:55:24 GMT
- Authors: Bin Yan, Houwen Peng, Kan Wu, Dong Wang, Jianlong Fu, Huchuan Lu
- Abstract summary: We present LightTrack, which uses neural architecture search (NAS) to design more lightweight and efficient object trackers.
Comprehensive experiments show that our LightTrack is effective.
It can find trackers that achieve superior performance compared to handcrafted SOTA trackers, such as SiamRPN++ and Ocean.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object tracking has achieved significant progress over the past few years.
However, state-of-the-art trackers become increasingly heavy and expensive,
which limits their deployment in resource-constrained applications. In this
work, we present LightTrack, which uses neural architecture search (NAS) to
design more lightweight and efficient object trackers. Comprehensive
experiments show that our LightTrack is effective. It can find trackers that
achieve superior performance compared to handcrafted SOTA trackers, such as
SiamRPN++ and Ocean, while using far fewer model FLOPs and parameters.
Moreover, when deployed on resource-constrained mobile chipsets, the discovered
trackers run much faster. For example, on Snapdragon 845 Adreno GPU, LightTrack
runs $12\times$ faster than Ocean, while using $13\times$ fewer parameters and
$38\times$ fewer FLOPs. Such improvements might narrow the gap between academic
models and industrial deployments in the object tracking task. LightTrack is
released at https://github.com/researchmm/LightTrack.
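The one-shot NAS pipeline the abstract alludes to (train a weight-sharing supernet once, then search for sub-network "paths" under a hardware budget) can be sketched roughly as follows. This is a minimal illustration, not LightTrack's actual implementation: the block names, FLOPs costs, and scoring function are all hypothetical stand-ins, and the evaluation step merely mimics scoring a path with inherited supernet weights.

```python
import random

# Hypothetical search space loosely mirroring a mobile backbone: each layer
# picks one block type. Names and layer count are illustrative only.
SEARCH_SPACE = [
    ["mbconv_k3", "mbconv_k5", "mbconv_k7", "skip"],  # layer 0
    ["mbconv_k3", "mbconv_k5", "mbconv_k7", "skip"],  # layer 1
    ["mbconv_k3", "mbconv_k5", "mbconv_k7", "skip"],  # layer 2
]

# Illustrative per-block FLOPs (arbitrary units), standing in for the
# profiled costs a deployment-aware search would use.
FLOPS = {"mbconv_k3": 1.0, "mbconv_k5": 1.8, "mbconv_k7": 2.9, "skip": 0.0}


def sample_path(space, rng):
    """Uniformly sample one sub-network (path) from the supernet."""
    return [rng.choice(choices) for choices in space]


def evaluate(path):
    """Toy stand-in for validating a path with inherited supernet weights:
    rewards capacity (non-skip blocks) and penalizes FLOPs."""
    capacity = sum(0.0 if block == "skip" else 1.0 for block in path)
    cost = sum(FLOPS[block] for block in path)
    return capacity - 0.3 * cost


def search(space, budget, n_samples=200, seed=0):
    """Random search over sampled paths, keeping the best one whose total
    FLOPs stay under the deployment budget."""
    rng = random.Random(seed)
    best_path, best_score = None, float("-inf")
    for _ in range(n_samples):
        path = sample_path(space, rng)
        if sum(FLOPS[block] for block in path) > budget:
            continue  # reject paths that exceed the hardware budget
        score = evaluate(path)
        if score > best_score:
            best_path, best_score = path, score
    return best_path, best_score
```

In a real one-shot search, `evaluate` would run the candidate path with weights inherited from the trained supernet on validation data, and the random sampler would typically be replaced by an evolutionary search; the budget-rejection step is what steers the search toward lightweight architectures.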
Related papers
- LITE: A Paradigm Shift in Multi-Object Tracking with Efficient ReID Feature Integration [0.3277163122167433]
Lightweight Integrated Tracking-Feature Extraction paradigm is introduced as a novel multi-object tracking (MOT) approach.
It enhances ReID-based trackers by eliminating the separate ReID model's inference, pre-processing, post-processing, and training costs.
arXiv Detail & Related papers (2024-09-06T11:05:12Z)
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z)
- Mamba-FETrack: Frame-Event Tracking via State Space Model [14.610806117193116]
This paper proposes a novel RGB-Event tracking framework, Mamba-FETrack, based on the State Space Model (SSM).
Specifically, we adopt two modality-specific Mamba backbone networks to extract the features of RGB frames and Event streams.
Extensive experiments on the FELT and FE108 datasets fully validate the efficiency and effectiveness of our proposed tracker.
arXiv Detail & Related papers (2024-04-28T13:12:49Z)
- OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning [33.521077115333696]
We present a general framework to unify various tracking tasks, termed as OneTracker.
OneTracker first performs large-scale pre-training on an RGB tracker called Foundation Tracker.
Then we regard other modality information as a prompt and build Prompt Tracker upon Foundation Tracker.
arXiv Detail & Related papers (2024-03-14T17:59:13Z)
- Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It involves providing implicit tracking instructions that require the trackers to perform tracking automatically in video frames.
TrackGPT is capable of performing complex reasoning-based tracking.
arXiv Detail & Related papers (2023-12-29T03:22:18Z)
- LiteTrack: Layer Pruning with Asynchronous Feature Extraction for Lightweight and Efficient Visual Tracking [4.179339279095506]
LiteTrack is an efficient transformer-based tracking model optimized for high-speed operations across various devices.
It achieves a more favorable trade-off between accuracy and efficiency than the other lightweight trackers.
LiteTrack-B9 reaches competitive 72.2% AO on GOT-10k and 82.4% AUC on TrackingNet, and operates at 171 fps on an NVIDIA 2080Ti GPU.
arXiv Detail & Related papers (2023-09-17T12:01:03Z)
- CoTracker: It is Better to Track Together [70.63040730154984]
CoTracker is a transformer-based model that tracks a large number of 2D points in long video sequences.
We show that joint tracking significantly improves tracking accuracy and robustness, and allows CoTracker to track occluded points and points outside of the camera view.
arXiv Detail & Related papers (2023-07-14T21:13:04Z)
- Efficient Visual Tracking with Exemplar Transformers [98.62550635320514]
We introduce the Exemplar Transformer, an efficient transformer for real-time visual object tracking.
E.T.Track, our visual tracker that incorporates Exemplar Transformer layers, runs at 47 fps on a CPU.
This is up to 8 times faster than other transformer-based models.
arXiv Detail & Related papers (2021-12-17T18:57:54Z)
- High-Performance Long-Term Tracking with Meta-Updater [75.80564183653274]
Long-term visual tracking has drawn increasing attention because it is much closer to practical applications than short-term tracking.
Most top-ranked long-term trackers adopt offline-trained Siamese architectures and thus cannot benefit from the great progress of short-term trackers with online updates.
We propose a novel offline-trained Meta-Updater to address an important but unsolved problem: Is the tracker ready for updating in the current frame?
arXiv Detail & Related papers (2020-04-01T09:29:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.