Robust Long-Term Object Tracking via Improved Discriminative Model Prediction
- URL: http://arxiv.org/abs/2008.04722v2
- Date: Tue, 25 Aug 2020 15:37:50 GMT
- Title: Robust Long-Term Object Tracking via Improved Discriminative Model Prediction
- Authors: Seokeon Choi, Junhyun Lee, Yunsung Lee, Alexander Hauptmann
- Abstract summary: We propose an improved discriminative model prediction method for robust long-term tracking based on a pre-trained short-term tracker.
The proposed method achieves performance comparable to state-of-the-art long-term trackers.
- Score: 77.72450371348016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an improved discriminative model prediction method for robust
long-term tracking based on a pre-trained short-term tracker. The baseline
pre-trained short-term tracker is SuperDiMP, which combines the bounding-box
regressor of PrDiMP with the standard DiMP classifier. Our tracker RLT-DiMP
improves SuperDiMP in the following three aspects. (1) Uncertainty reduction
using random erasing: to make the model robust, we erase small random
rectangular areas from multiple copies of the search image and treat the
agreement among the resulting predictions as a certainty measure, correcting
the tracking state accordingly. (2) Random search with spatio-temporal
constraints: we propose a robust random search method with a score penalty
that suppresses sudden detections far from the previous target location.
(3) Background augmentation for more discriminative feature learning: we
augment training samples with various backgrounds not contained in the search
area, making the model more robust to background clutter. In experiments on
the VOT-LT2020 benchmark dataset, the proposed method achieves performance
comparable to state-of-the-art long-term trackers. The source code is
available at: https://github.com/bismex/RLT-DIMP.
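As a rough illustration of aspect (1), here is a minimal sketch of random-erasing agreement. Nothing below is taken from the RLT-DIMP repository: `score_fn` is a hypothetical stand-in for the DiMP classifier's peak confidence, and the patch sizes and the variance-based certainty are illustrative assumptions.

```python
import numpy as np

def erase_random_patch(image, rng, max_frac=0.2):
    """Return a copy of `image` with one small random rectangle zeroed out."""
    h, w = image.shape[:2]
    eh = int(rng.integers(1, max(2, int(h * max_frac))))
    ew = int(rng.integers(1, max(2, int(w * max_frac))))
    y = int(rng.integers(0, h - eh + 1))
    x = int(rng.integers(0, w - ew + 1))
    erased = image.copy()
    erased[y:y + eh, x:x + ew] = 0
    return erased

def agreement_certainty(image, score_fn, n_views=5, seed=0):
    """Score several randomly-erased views of the search image and treat
    low variance across views (agreement) as a certainty measure."""
    rng = np.random.default_rng(seed)
    scores = np.array([score_fn(erase_random_patch(image, rng))
                       for _ in range(n_views)], dtype=np.float64)
    certainty = 1.0 / (1.0 + scores.std())  # high agreement -> close to 1
    return scores.mean(), certainty
```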
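Aspect (2) can likewise be sketched. The Gaussian penalty and the tolerance radius that widens with the number of frames since the target was last seen are assumed forms, not the paper's exact formulation.

```python
import numpy as np

def penalized_score(score, candidate_xy, prev_xy, frames_since_seen, sigma=30.0):
    """Down-weight candidates that appear suddenly far from the last
    confirmed target position; the tolerated radius grows the longer the
    target has been lost, so distant re-detection remains possible."""
    dist = np.linalg.norm(np.asarray(candidate_xy, float)
                          - np.asarray(prev_xy, float))
    radius = sigma * max(1, frames_since_seen)  # spatio-temporal constraint
    penalty = np.exp(-0.5 * (dist / radius) ** 2)
    return score * penalty
```

Widening the radius over time is one way to keep long-absent targets re-detectable while still suppressing implausible jumps between consecutive frames.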
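Aspect (3) amounts to pasting clutter from outside the search area into training samples. Here is a minimal sketch under stated assumptions: `background_pool` is a hypothetical list of background crops, each at least as large as the sample and with the same channel layout.

```python
import numpy as np

def augment_with_background(sample, background_pool, rng, n_patches=2):
    """Paste random crops taken from backgrounds outside the search area
    into a training sample, exposing the classifier to harder clutter.
    Assumes `sample` is reasonably large (say, >= 32 px per side)."""
    out = sample.copy()
    h, w = out.shape[:2]
    for _ in range(n_patches):
        bg = background_pool[int(rng.integers(len(background_pool)))]
        ph = int(rng.integers(h // 8, h // 4))   # patch height
        pw = int(rng.integers(w // 8, w // 4))   # patch width
        by = int(rng.integers(0, bg.shape[0] - ph + 1))
        bx = int(rng.integers(0, bg.shape[1] - pw + 1))
        y = int(rng.integers(0, h - ph + 1))
        x = int(rng.integers(0, w - pw + 1))
        out[y:y + ph, x:x + pw] = bg[by:by + ph, bx:bx + pw]
    return out
```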
Related papers
- Robust Visual Tracking via Iterative Gradient Descent and Threshold Selection [4.978166837959101]
We introduce a novel robust linear regression estimator that achieves favorable performance when the error vector follows an i.i.d. Gaussian-Laplacian distribution.
In addition, we extend IGDTS to a generative tracker and apply the IGDTS-distance to measure the deviation between a sample and the model.
Experimental results on several challenging image sequences show that the proposed tracker outperforms existing trackers.
arXiv Detail & Related papers (2024-06-02T01:51:09Z)
- 3DMOTFormer: Graph Transformer for Online 3D Multi-Object Tracking [15.330384668966806]
State-of-the-art 3D multi-object tracking (MOT) approaches typically rely on non-learned model-based algorithms such as the Kalman filter.
We propose 3DMOTFormer, a learned geometry-based 3D MOT framework building upon the transformer architecture.
Our approach achieves 71.2% and 68.2% AMOTA on the nuScenes validation and test split, respectively.
arXiv Detail & Related papers (2023-08-12T19:19:58Z)
- TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses [51.60422927416087]
3D multi-object tracking (MOT) is vital for many applications including autonomous driving vehicles and service robots.
We present TrajectoryFormer, a novel point-cloud-based 3D MOT framework.
arXiv Detail & Related papers (2023-06-09T13:31:50Z)
- You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking [9.20064374262956]
The proposed framework can achieve robust tracking by using only a 2D detector and a 3D detector.
It is shown to be more accurate than many state-of-the-art TBD-based multi-modal tracking methods.
arXiv Detail & Related papers (2023-04-18T02:45:18Z)
- Post-Processing Temporal Action Detection [134.26292288193298]
Temporal Action Detection (TAD) methods typically apply a pre-processing step that converts a varying-length input video into a fixed-length snippet representation sequence.
This step temporally downsamples the video, reducing the inference resolution and hampering detection performance at the original temporal resolution.
We introduce a novel model-agnostic post-processing method that requires no model redesign or retraining.
arXiv Detail & Related papers (2022-11-27T19:50:37Z)
- Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box Estimation [85.22775182688798]
This work proposes a novel, flexible, and accurate refinement module called Alpha-Refine.
It can significantly improve the base trackers' box estimation quality.
Experiments on TrackingNet, LaSOT, GOT-10K, and VOT 2020 benchmarks show that our approach significantly improves the base trackers' performance with little extra latency.
arXiv Detail & Related papers (2020-12-12T13:33:25Z)
- Cascaded Regression Tracking: Towards Online Hard Distractor Discrimination [202.2562153608092]
We propose a cascaded regression tracker with two sequential stages.
In the first stage, we filter out abundant easily-identified negative candidates.
In the second stage, a discrete-sampling-based ridge regression is designed to double-check the remaining ambiguous hard samples.
arXiv Detail & Related papers (2020-06-18T07:48:01Z)
- LRPD: Long Range 3D Pedestrian Detection Leveraging Specific Strengths of LiDAR and RGB [12.650574326251023]
The current state-of-the-art on the KITTI benchmark performs suboptimally in detecting the position of pedestrians at long range.
We propose an approach specifically targeting long range 3D pedestrian detection (LRPD), leveraging the density of RGB and the precision of LiDAR.
This leads to a significant improvement in long-range mAP compared to the current state-of-the-art.
arXiv Detail & Related papers (2020-06-17T09:27:38Z)
- ArTIST: Autoregressive Trajectory Inpainting and Scoring for Tracking [80.02322563402758]
One of the core components in online multiple object tracking (MOT) frameworks is associating new detections with existing tracklets.
We introduce a probabilistic autoregressive generative model to score tracklet proposals by directly measuring the likelihood that a tracklet represents natural motion.
arXiv Detail & Related papers (2020-04-16T06:43:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.