SDOF-Tracker: Fast and Accurate Multiple Human Tracking by
Skipped-Detection and Optical-Flow
- URL: http://arxiv.org/abs/2106.14259v2
- Date: Tue, 29 Jun 2021 04:58:45 GMT
- Title: SDOF-Tracker: Fast and Accurate Multiple Human Tracking by
Skipped-Detection and Optical-Flow
- Authors: Hitoshi Nishimura, Satoshi Komorita, Yasutomo Kawanishi, Hiroshi
Murase
- Abstract summary: This study aims to improve running speed by performing human detection at a certain frame interval.
We propose a method that complements the detection results with optical flow, based on the fact that someone's appearance does not change much between adjacent frames.
On the MOT20 dataset in the MOTChallenge, the proposed SDOF-Tracker achieved the best performance in terms of the total running speed.
- Score: 5.041369269600902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiple human tracking is a fundamental problem for scene understanding.
Although both accuracy and speed are required in real-world applications,
recent tracking methods based on deep learning have focused on accuracy and
require substantial running time. This study aims to improve running speed by
performing human detection at a certain frame interval because it accounts for
most of the running time. The question is how to maintain accuracy while
skipping human detection. In this paper, we propose a method that complements
the detection results with optical flow, based on the fact that someone's
appearance does not change much between adjacent frames. To maintain the
tracking accuracy, we introduce robust interest point selection within human
regions and a tracking termination metric calculated by the distribution of the
interest points. On the MOT20 dataset in the MOTChallenge, the proposed
SDOF-Tracker achieved the best performance in terms of the total running speed
while maintaining the MOTA metric. Our code is available at
https://anonymous.4open.science/r/sdof-tracker-75AE.
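The abstract sketches the core mechanism: run the expensive detector only every few frames, and carry each person's box through the skipped frames with optical flow computed on interest points inside the box, terminating a track when those points scatter. Below is a minimal, hedged sketch of that pipeline using OpenCV; it is not the authors' released code (see the repository link above), `detect` is a placeholder for any person detector, and the detection interval and scatter threshold are illustrative values.

```python
# Minimal sketch of the skipped-detection + optical-flow idea from the
# abstract. Not the authors' implementation: `detect` is a placeholder for
# any person detector; DETECT_INTERVAL and MAX_SPREAD are assumptions.
import cv2
import numpy as np

DETECT_INTERVAL = 5   # run the detector every N frames (assumption)
MAX_SPREAD = 80.0     # drop a track if its points scatter (assumption)

def propagate_boxes(prev_gray, gray, boxes):
    """Shift each box by the median optical flow of interest points inside it."""
    new_boxes = []
    for (x, y, w, h) in boxes:
        roi = prev_gray[int(y):int(y + h), int(x):int(x + w)]
        if roi.size == 0:
            continue
        pts = cv2.goodFeaturesToTrack(roi, maxCorners=30,
                                      qualityLevel=0.01, minDistance=5)
        if pts is None:
            continue
        # Interest points are relative to the ROI; shift to frame coordinates.
        pts = pts.reshape(-1, 2) + np.array([x, y], dtype=np.float32)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts.reshape(-1, 1, 2), None)
        ok = status.ravel() == 1
        good_old, good_new = pts[ok], nxt.reshape(-1, 2)[ok]
        if len(good_new) == 0:
            continue
        # Termination heuristic: widely scattered points suggest the person
        # was lost or occluded, so terminate the track.
        if good_new.std(axis=0).sum() > MAX_SPREAD:
            continue
        dx, dy = np.median(good_new - good_old, axis=0)
        new_boxes.append((x + dx, y + dy, w, h))
    return new_boxes

def track(video_path, detect):
    cap = cv2.VideoCapture(video_path)
    boxes, prev_gray, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if frame_idx % DETECT_INTERVAL == 0:
            boxes = detect(frame)   # expensive detector, run sparsely
        elif prev_gray is not None:
            boxes = propagate_boxes(prev_gray, gray, boxes)  # cheap flow update
        prev_gray, frame_idx = gray, frame_idx + 1
        yield frame_idx, boxes
```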
Related papers
- Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking [52.04679257903805]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks.
Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks.
arXiv Detail & Related papers (2024-07-19T07:48:45Z)
- Dense Optical Tracking: Connecting the Dots [82.79642869586587]
DOT is a novel, simple and efficient method for solving the problem of point tracking in a video.
We show that DOT is significantly more accurate than current optical flow techniques, outperforms sophisticated "universal trackers" like OmniMotion, and is on par with, or better than, the best point tracking algorithms like CoTracker.
arXiv Detail & Related papers (2023-12-01T18:59:59Z)
- Minkowski Tracker: A Sparse Spatio-Temporal R-CNN for Joint Object Detection and Tracking [53.64390261936975]
We present Minkowski Tracker, a sparse spatio-temporal R-CNN that jointly solves object detection and tracking.
Inspired by region-based CNN (R-CNN), we propose to track motion as a second stage of the R-CNN object detector.
We show in large-scale experiments that the overall performance gain of our method is due to four factors.
arXiv Detail & Related papers (2022-08-22T04:47:40Z)
- VariabilityTrack: Multi-Object Tracking with Variable Speed Object Movement [1.6385815610837167]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.
We propose a variable-speed Kalman filter algorithm based on environmental feedback and improve the matching process (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-03-12T12:39:41Z)
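The VariabilityTrack entry above proposes a variable-speed Kalman filter driven by environmental feedback. As a hedged illustration of that general idea, and not the paper's actual formulation, the sketch below uses a 1-D constant-velocity filter whose process noise is inflated when the innovation (the prediction-measurement gap) grows, so erratic motion loosens the motion prior.

```python
# Hedged sketch of a "variable speed" Kalman filter: a 1-D constant-velocity
# model whose process noise grows with the recent innovation. The scaling
# rule and constants are illustrative assumptions, not VariabilityTrack's.
import numpy as np

class VariableSpeedKF:
    def __init__(self, dt=1.0):
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2) * 10.0                   # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])             # we observe position only
        self.R = np.array([[1.0]])                  # measurement noise
        self.q = 0.1                                # baseline process noise

    def step(self, z):
        # Predict with the current (adaptive) process noise.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + np.eye(2) * self.q
        # Innovation = measurement minus predicted position.
        y = z - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K.flatten() * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # "Environmental feedback": large innovations suggest the object is
        # changing speed, so inflate the process noise; small ones shrink it.
        self.q = float(np.clip(0.1 * abs(y), 0.05, 5.0))
        return self.x[0]
```

Usage is simply `kf = VariableSpeedKF()` followed by `kf.step(z)` per measurement.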
- DeepScale: An Online Frame Size Adaptation Framework to Accelerate Visual Multi-object Tracking [8.878656943106934]
DeepScale is a model agnostic frame size selection approach to accelerate tracking throughput.
It can find a suitable trade-off between tracking accuracy and speed by adapting frame sizes at run time.
Compared to a state-of-the-art tracker, DeepScale++, a variant of DeepScale, achieves a 1.57x speedup with only moderate tracking accuracy degradation (a sketch of the frame-size adaptation idea follows this entry).
arXiv Detail & Related papers (2021-07-22T00:12:58Z)
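As noted in the DeepScale entry above, frame size can be adapted at run time to trade accuracy for speed. The sketch below is a hypothetical controller in that spirit, not DeepScale's learned policy: it steps down a scale ladder while recent detections are confident and steps back up when confidence drops; the thresholds and ladder are assumptions.

```python
# Hypothetical frame-size controller in the spirit of DeepScale: shrink the
# input while the tracker is confident, grow it back when confidence drops.
# The scale ladder and thresholds are illustrative assumptions.
import cv2

SCALES = [1.0, 0.75, 0.5]   # full, medium, small resolution

class FrameSizeController:
    def __init__(self, low_conf=0.4, high_conf=0.7):
        self.level = 0        # index into SCALES
        self.low, self.high = low_conf, high_conf

    def resize(self, frame):
        s = SCALES[self.level]
        if s == 1.0:
            return frame
        return cv2.resize(frame, None, fx=s, fy=s)

    def update(self, mean_det_confidence):
        # Confident detections -> try a smaller (faster) frame size;
        # weak detections -> back off to a larger (more accurate) one.
        if mean_det_confidence > self.high and self.level < len(SCALES) - 1:
            self.level += 1
        elif mean_det_confidence < self.low and self.level > 0:
            self.level -= 1
```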
- Distractor-Aware Fast Tracking via Dynamic Convolutions and MOT Philosophy [63.91005999481061]
A practical long-term tracker typically contains three key properties, i.e., an efficient model design, an effective global re-detection strategy, and a robust distractor-awareness mechanism.
We propose a two-task tracking framework (named DMTrack) to achieve distractor-aware fast tracking via Dynamic convolutions (d-convs) and Multiple object tracking (MOT) philosophy.
Our tracker achieves state-of-the-art performance on the LaSOT, OxUvA, TLP, VOT2018LT and VOT2019LT benchmarks and runs in real-time (3x faster ...).
arXiv Detail & Related papers (2021-04-25T00:59:53Z)
- CurbScan: Curb Detection and Tracking Using Multi-Sensor Fusion [0.8722958995761769]
Curb detection and tracking are useful in vehicle localization and path planning.
We propose an approach to detect and track curbs by fusing together data from multiple sensors.
Our algorithm maintains over 90% accuracy within 4.5-22 meters and 0-14 meters for the KITTI dataset and our dataset respectively.
arXiv Detail & Related papers (2020-10-09T22:48:20Z)
- Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets [96.98888948518815]
State-of-the-art multi-object tracking(MOT) methods follow the tracking-by-detection paradigm.
We propose a new MOT paradigm, tracking-by-counting, tailored for crowded scenes.
arXiv Detail & Related papers (2020-07-18T19:51:53Z)
- Tracking Objects as Points [83.9217787335878]
We present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art.
Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame.
CenterTrack is simple, online (no peeking into the future), and real-time.
arXiv Detail & Related papers (2020-04-02T17:58:40Z)
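The CenterTrack entry above says the detection model consumes a pair of images plus the prior frame's detections. Below is a minimal sketch of assembling such an input, with prior detection centers rendered as a Gaussian heatmap channel stacked onto the two frames; the Gaussian radius and normalization are illustrative assumptions.

```python
# Sketch of CenterTrack-style input assembly: current frame, previous frame,
# and a single-channel heatmap of the previous frame's detection centers are
# stacked into one tensor. The Gaussian radius is an assumption.
import numpy as np

def render_center_heatmap(shape, centers, sigma=4.0):
    """Splat a Gaussian bump at each prior detection center."""
    h, w = shape
    heat = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for (cx, cy) in centers:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, g.astype(np.float32))
    return heat

def build_input(frame, prev_frame, prev_centers):
    """Stack [current RGB | previous RGB | prior-detection heatmap] into HxWx7."""
    f = frame.astype(np.float32) / 255.0
    p = prev_frame.astype(np.float32) / 255.0
    heat = render_center_heatmap(frame.shape[:2], prev_centers)
    return np.concatenate([f, p, heat[..., None]], axis=-1)
```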
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.