Hy-Tracker: A Novel Framework for Enhancing Efficiency and Accuracy of
Object Tracking in Hyperspectral Videos
- URL: http://arxiv.org/abs/2311.18199v1
- Date: Thu, 30 Nov 2023 02:38:45 GMT
- Authors: Mohammad Aminul Islam, Wangzhi Xing, Jun Zhou, Yongsheng Gao, Kuldip
K. Paliwal
- Abstract summary: We propose a novel framework called Hy-Tracker to bridge the gap between hyperspectral data and state-of-the-art object detection methods.
The framework incorporates a refined tracking module on top of YOLOv7.
The experimental results on hyperspectral benchmark datasets demonstrate the effectiveness of Hy-Tracker.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral object tracking has recently emerged as a topic of great
interest in the remote sensing community. The hyperspectral image, with its
many bands, provides a rich source of material information of an object that
can be effectively used for object tracking. While most hyperspectral trackers
are based on detection techniques, none has yet employed YOLO for detecting and
tracking the object. This is due to the presence of multiple spectral bands,
the scarcity of annotated hyperspectral videos, and YOLO's limitations in
handling occlusion and in distinguishing objects against cluttered
backgrounds. Therefore, in this paper, we propose a novel
framework called Hy-Tracker, which aims to bridge the gap between hyperspectral
data and state-of-the-art object detection methods to leverage the strengths of
YOLOv7 for object tracking in hyperspectral videos. Hy-Tracker not only
introduces YOLOv7 but also innovatively incorporates a refined tracking module
on top of YOLOv7. The tracker refines the initial detections produced by
YOLOv7, leading to improved object-tracking performance. Furthermore, we
incorporate a Kalman filter into the tracker, which addresses the challenges
posed by scale variation and occlusion. The experimental results on
hyperspectral benchmark datasets demonstrate the effectiveness of Hy-Tracker in
accurately tracking objects across frames.
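The predict-then-correct loop described in the abstract can be sketched as follows. This is a generic constant-velocity Kalman filter over bounding-box coordinates, shown only to illustrate how such a filter carries a track through occlusion and smooths scale changes; the state layout, noise settings, and the `refine` helper are illustrative assumptions, not Hy-Tracker's actual implementation.

```python
import numpy as np

class BoxKalman:
    """Constant-velocity Kalman filter over the state
    [cx, cy, w, h, vx, vy] for a single tracked bounding box."""

    def __init__(self, box):
        cx, cy, w, h = box
        self.x = np.array([cx, cy, w, h, 0.0, 0.0], dtype=float)  # state
        self.P = np.eye(6) * 10.0                    # state covariance
        self.F = np.eye(6)                           # transition matrix
        self.F[0, 4] = self.F[1, 5] = 1.0            # cx += vx, cy += vy
        self.H = np.zeros((4, 6))                    # we observe the box only
        self.H[:4, :4] = np.eye(4)
        self.Q = np.eye(6) * 0.01                    # process noise
        self.R = np.eye(4) * 1.0                     # measurement noise

    def predict(self):
        """Propagate the state one frame forward."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, box):
        """Correct the prediction with an observed detection."""
        z = np.asarray(box, dtype=float)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:4]

def refine(detections, kf):
    """Refine per-frame detections: predict every frame, correct when a
    detection exists, and coast on the prediction during occlusion (None)."""
    out = []
    for det in detections:
        pred = kf.predict()
        out.append(kf.update(det) if det is not None else pred)
    return out
```

Because the filter learns a velocity from past detections, a frame with no detection (an occlusion) still yields a plausible box that keeps the track alive until the object reappears.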
Related papers
- SFTrack: A Robust Scale and Motion Adaptive Algorithm for Tracking Small and Fast Moving Objects [2.9803250365852443]
This paper addresses the problem of multi-object tracking in Unmanned Aerial Vehicle (UAV) footage.
It plays a critical role in various UAV applications, including traffic monitoring systems and real-time suspect tracking by the police.
We propose a new tracking strategy, which initiates the tracking of target objects from low-confidence detections.
arXiv Detail & Related papers (2024-10-26T05:09:20Z) - SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z) - Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports [26.33239898091364]
We propose a novel online and robust multi-object tracking approach named deep ExpansionIoU (Deep-EIoU) for sports scenarios.
Unlike conventional methods, we abandon the use of the Kalman filter and leverage the iterative scale-up ExpansionIoU and deep features for robust tracking in sports scenarios.
Our proposed method demonstrates remarkable effectiveness in tracking irregular motion objects, achieving a score of 77.2% on the SportsMOT dataset and 85.4% on the SoccerNet-Tracking dataset.
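The core idea of ExpansionIoU can be sketched as follows: enlarge both boxes about their centers before computing IoU, so that fast-moving objects whose plain IoU across frames is zero can still be associated. The expansion rule and the `scale` value below are illustrative assumptions; Deep-EIoU's exact formulation may differ.

```python
def iou(a, b):
    """Standard IoU for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def expansion_iou(a, b, scale=0.7):
    """IoU computed after expanding both boxes about their centers:
    each side grows by `scale` times the box's width/height."""
    def expand(box):
        x1, y1, x2, y2 = box
        ew, eh = scale * (x2 - x1), scale * (y2 - y1)
        return (x1 - ew, y1 - eh, x2 + ew, y2 + eh)
    return iou(expand(a), expand(b))
```

For example, two boxes of the same object two frames apart may not overlap at all under plain IoU, yet overlap substantially once expanded, which is what makes greedy or Hungarian association possible for irregular, fast motion.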
arXiv Detail & Related papers (2023-06-22T17:47:08Z) - MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z) - Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR based 3D Object Detection [50.959453059206446]
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector in a track-centric perspective instead of the conventional object-centric perspective.
arXiv Detail & Related papers (2023-04-24T17:59:05Z) - OmniTracker: Unifying Object Tracking by Tracking-with-Detection [119.51012668709502]
OmniTracker is presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline.
Experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
arXiv Detail & Related papers (2023-03-21T17:59:57Z) - Learning to Track Object Position through Occlusion [32.458623495840904]
Occlusion is one of the most significant challenges encountered by object detectors and trackers.
We propose a tracking-by-detection approach that builds upon the success of region based video object detectors.
Our approach achieves superior results on a dataset of furniture assembly videos collected from the internet.
arXiv Detail & Related papers (2021-06-20T22:29:46Z) - Track to Detect and Segment: An Online Multi-Object Tracker [81.15608245513208]
TraDeS is an online joint detection and tracking model, exploiting tracking clues to assist detection end-to-end.
TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features.
arXiv Detail & Related papers (2021-03-16T02:34:06Z) - Detecting Invisible People [58.49425715635312]
We re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects.
We demonstrate that current detection and tracking systems perform dramatically worse on this task.
Second, we build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks.
arXiv Detail & Related papers (2020-12-15T16:54:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.