End-to-end Deep Object Tracking with Circular Loss Function for Rotated
Bounding Box
- URL: http://arxiv.org/abs/2012.09771v1
- Date: Thu, 17 Dec 2020 17:29:29 GMT
- Title: End-to-end Deep Object Tracking with Circular Loss Function for Rotated
Bounding Box
- Authors: Vladislav Belyaev, Aleksandra Malysheva, Aleksei Shpilman
- Abstract summary: We introduce a novel end-to-end deep learning method based on the Transformer Multi-Head Attention architecture.
We also present a new type of loss function, which takes into account the bounding box overlap and orientation.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of object tracking is vital in numerous applications such as
autonomous driving, intelligent surveillance, and robotics. The task entails
assigning a bounding box to an object in a video stream, given only the
bounding box for that object on the first frame. In 2015, a new type of video
object tracking (VOT) dataset was created that introduced rotated bounding
boxes as an extension of axis-aligned ones. In this work, we introduce a novel
end-to-end deep learning method based on the Transformer Multi-Head Attention
architecture. We also present a new type of loss function, which takes into
account the bounding box overlap and orientation.
Our Deep Object Tracking model with Circular Loss Function (DOTCL) shows a
considerable improvement in robustness over current state-of-the-art
end-to-end deep learning models. It also outperforms state-of-the-art object
tracking methods on the VOT2018 dataset in terms of the expected average
overlap (EAO) metric.
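The listing does not reproduce the DOTCL loss itself, but a minimal sketch of a loss in this spirit, combining a box-overlap term with a penalty that respects the pi-periodicity of a rotated rectangle's orientation, might look as follows. The function names, the axis-aligned IoU stand-in for the true rotated overlap, and the weight `alpha` are illustrative assumptions, not the authors' formulation.

```python
import torch

def aligned_iou(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """IoU of the axis-aligned extents of boxes given as (cx, cy, w, h, theta).

    A cheap stand-in for the true rotated-box overlap, which is harder to
    compute; an assumption made for illustration only.
    """
    p_x1, p_y1 = pred[..., 0] - pred[..., 2] / 2, pred[..., 1] - pred[..., 3] / 2
    p_x2, p_y2 = pred[..., 0] + pred[..., 2] / 2, pred[..., 1] + pred[..., 3] / 2
    g_x1, g_y1 = gt[..., 0] - gt[..., 2] / 2, gt[..., 1] - gt[..., 3] / 2
    g_x2, g_y2 = gt[..., 0] + gt[..., 2] / 2, gt[..., 1] + gt[..., 3] / 2
    inter_w = (torch.min(p_x2, g_x2) - torch.max(p_x1, g_x1)).clamp(min=0)
    inter_h = (torch.min(p_y2, g_y2) - torch.max(p_y1, g_y1)).clamp(min=0)
    inter = inter_w * inter_h
    union = pred[..., 2] * pred[..., 3] + gt[..., 2] * gt[..., 3] - inter
    return inter / union.clamp(min=1e-6)

def circular_loss(pred: torch.Tensor, gt: torch.Tensor,
                  alpha: float = 1.0) -> torch.Tensor:
    """Hypothetical overlap + orientation loss for (cx, cy, w, h, theta) boxes."""
    # Overlap term: zero when the boxes coincide, one when they are disjoint.
    overlap = 1.0 - aligned_iou(pred, gt)
    # Circular orientation term: a rectangle rotated by pi coincides with
    # itself, so the error is measured on the doubled angle and wrapped
    # with atan2 to stay continuous at the period boundary.
    d = 2.0 * (pred[..., 4] - gt[..., 4])
    angle = torch.atan2(torch.sin(d), torch.cos(d)).abs() / 2.0
    return (overlap + alpha * angle).mean()

# A prediction whose angle differs from the ground truth by roughly pi is
# almost the same rotated box, so the orientation penalty stays small.
pred = torch.tensor([[50.0, 50.0, 20.0, 10.0, 0.3]], requires_grad=True)
gt = torch.tensor([[52.0, 49.0, 22.0, 10.0, 0.3 + 3.1416]])
loss = circular_loss(pred, gt)
loss.backward()  # differentiable end to end, as an end-to-end tracker needs
```

Wrapping the doubled angle with atan2 keeps the gradient well behaved where a naive |theta_pred - theta_gt| would jump at the period boundary.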
Related papers
- Zero-Shot Open-Vocabulary Tracking with Large Pre-Trained Models [28.304047711166056]
Large-scale pre-trained models have shown promising advances in detecting and segmenting objects in 2D static images in the wild.
This begs the question: can we re-purpose these large-scale pre-trained static image models for open-vocabulary video tracking?
In this paper, we re-purpose an open-vocabulary detector, segmenter, and dense optical flow estimator, into a model that tracks and segments objects of any category in 2D videos.
arXiv Detail & Related papers (2023-10-10T20:25:30Z)
- UnsMOT: Unified Framework for Unsupervised Multi-Object Tracking with Geometric Topology Guidance [6.577227592760559]
UnsMOT is a novel framework that combines appearance and motion features of objects with geometric information to provide more accurate tracking.
Experimental results show remarkable performance in terms of HOTA, IDF1, and MOTA metrics in comparison with state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T04:58:12Z)
- TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses [51.60422927416087]
3D multi-object tracking (MOT) is vital for many applications, including autonomous vehicles and service robots.
We present TrajectoryFormer, a novel point-cloud-based 3D MOT framework.
arXiv Detail & Related papers (2023-06-09T13:31:50Z)
- Contrastive Lift: 3D Object Instance Segmentation by Slow-Fast Contrastive Fusion [110.84357383258818]
We propose a novel approach to lift 2D segments to 3D and fuse them by means of a neural field representation.
The core of our approach is a slow-fast clustering objective function, which is scalable and well-suited for scenes with a large number of objects.
Our approach outperforms the state-of-the-art on challenging scenes from the ScanNet, Hypersim, and Replica datasets.
arXiv Detail & Related papers (2023-06-07T17:57:45Z)
- Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR-based 3D Object Detection [50.959453059206446]
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector that adopts this track-centric perspective instead of the conventional object-centric one.
arXiv Detail & Related papers (2023-04-24T17:59:05Z)
- OPA-3D: Occlusion-Aware Pixel-Wise Aggregation for Monocular 3D Object Detection [51.153003057515754]
OPA-3D is a single-stage, end-to-end, Occlusion-Aware Pixel-Wise Aggregation network.
It jointly estimates dense scene depth with depth-bounding box residuals and object bounding boxes.
It outperforms state-of-the-art methods on the main Car category.
arXiv Detail & Related papers (2022-11-02T14:19:13Z)
- RLM-Tracking: Online Multi-Pedestrian Tracking Supported by Relative Location Mapping [5.9669075749248774]
The problem of multi-object tracking is a fundamental computer vision research focus, widely used in public safety, transport, autonomous vehicles, robotics, and other areas involving artificial intelligence.
In this paper, we design a new multi-object tracker for the above issues that contains an object Relative Location Mapping (RLM) model and a Target Region Density (TRD) model.
The new tracker is more sensitive to the differences in position relationships between objects.
It can introduce low-score detection frames into different regions in real time according to the density of objects.
arXiv Detail & Related papers (2022-10-19T11:37:14Z)
- Recent Trends in 2D Object Detection and Applications in Video Event Recognition [0.76146285961466]
We discuss the pioneering works in object detection, followed by the recent breakthroughs that employ deep learning.
We highlight recent datasets for 2D object detection both in images and videos, and present a comparative performance summary of various state-of-the-art object detection techniques.
arXiv Detail & Related papers (2022-02-07T14:15:11Z)
- Learning to Track with Object Permanence [61.36492084090744]
We introduce an end-to-end trainable approach for joint object detection and tracking.
Our model, trained jointly on synthetic and real data, outperforms the state of the art on the KITTI and MOT17 datasets.
arXiv Detail & Related papers (2021-03-26T04:43:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.