Efficient Joint Detection and Multiple Object Tracking with Spatially
Aware Transformer
- URL: http://arxiv.org/abs/2211.05654v1
- Date: Wed, 9 Nov 2022 07:19:33 GMT
- Title: Efficient Joint Detection and Multiple Object Tracking with Spatially
Aware Transformer
- Authors: Siddharth Sagar Nijhawan, Leo Hoshikawa, Atsushi Irie, Masakazu
Yoshimura, Junji Otsuka, Takeshi Ohashi
- Abstract summary: We propose a light-weight and highly efficient Joint Detection and Tracking pipeline for the task of Multi-Object Tracking.
The pipeline is driven by a transformer-based backbone instead of a CNN, which scales well with the input resolution.
As a result of our modifications, we reduce the overall model size of TransTrack by 58.73% and the complexity by 78.72%.
- Score: 0.8808021343665321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a light-weight and highly efficient Joint Detection and Tracking
pipeline for the task of Multi-Object Tracking using a fully transformer-based
architecture. It is a modified version of TransTrack that overcomes the
computational bottleneck associated with its design while achieving a
state-of-the-art MOTA score of 73.20%. The model design is driven by a
transformer-based backbone instead of a CNN, which scales well with the input
resolution. We also propose a drop-in replacement for the feed-forward network
of the transformer encoder layer, using the Butterfly Transform operation for
channel fusion and a depth-wise convolution to learn the spatial context within
the feature maps that is otherwise missing from the attention maps of the
transformer. As a result of our modifications, we reduce the overall model size
of TransTrack by 58.73% and the complexity by 78.72%. We therefore expect our
design to provide novel perspectives for architecture optimization in future
research on multi-object tracking.
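To make the proposed FFN replacement more concrete, here is a minimal, hypothetical sketch (not the authors' code) of the idea the abstract describes: a butterfly-style channel fusion with O(C log C) parameters in place of dense channel mixing, followed by a 3x3 depth-wise convolution that injects spatial context into the encoder tokens. All module names, shapes, and hyper-parameters below are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; not the authors' implementation.
import math
import torch
import torch.nn as nn


class ButterflyChannelFusion(nn.Module):
    """Channel fusion with log2(C) butterfly stages: O(C log C) parameters
    instead of the O(C^2) of a dense linear layer."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels & (channels - 1) == 0, "channels must be a power of two"
        self.stages = int(math.log2(channels))
        # One learned 2x2 mixing matrix per channel pair per stage.
        self.weights = nn.Parameter(0.1 * torch.randn(self.stages, channels // 2, 2, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        b, c, h, w = x.shape
        for s in range(self.stages):
            stride = 1 << s
            blocks = c // (2 * stride)
            # View channels as (blocks, 2, stride): each slot pairs channel i
            # with channel i + stride inside a block of 2 * stride channels.
            x = x.reshape(b, blocks, 2, stride, h, w)
            top, bot = x[:, :, 0], x[:, :, 1]                  # (B, blocks, stride, H, W)
            wts = self.weights[s].view(blocks, stride, 2, 2, 1, 1)
            new_top = wts[:, :, 0, 0] * top + wts[:, :, 0, 1] * bot
            new_bot = wts[:, :, 1, 0] * top + wts[:, :, 1, 1] * bot
            x = torch.stack((new_top, new_bot), dim=2).reshape(b, c, h, w)
        return x


class SpatialButterflyFFN(nn.Module):
    """Sketch of the proposed FFN replacement: butterfly channel fusion plus a
    3x3 depth-wise convolution that supplies the spatial context the attention
    maps lack."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = ButterflyChannelFusion(channels)
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (B, H*W, C) as emitted by a transformer encoder layer.
        b, n, c = tokens.shape
        assert n == h * w, "token count must match the spatial grid"
        x = tokens.transpose(1, 2).reshape(b, c, h, w)        # back to a feature map
        x = self.act(self.norm(self.depthwise(self.fuse(x))))
        return x.flatten(2).transpose(1, 2)                   # back to (B, H*W, C)


# Example: 256-channel tokens on a 32x32 grid (all values illustrative).
ffn = SpatialButterflyFFN(channels=256)
tokens = torch.randn(2, 32 * 32, 256)
print(ffn(tokens, h=32, w=32).shape)   # torch.Size([2, 1024, 256])
```

In a real encoder layer, such a block would presumably sit behind the usual residual connection in place of the two dense linear layers of a standard FFN.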
Related papers
- CT-MVSNet: Efficient Multi-View Stereo with Cross-scale Transformer [8.962657021133925]
Cross-scale transformer (CT) processes feature representations at different stages without additional computation.
We introduce an adaptive matching-aware transformer (AMT) that employs different interactive attention combinations at multiple scales.
We also present a dual-feature guided aggregation (DFGA) that embeds the coarse global semantic information into the finer cost volume construction.
arXiv Detail & Related papers (2023-12-14T01:33:18Z)
- SGDViT: Saliency-Guided Dynamic Vision Transformer for UAV Tracking [12.447854608181833]
This work presents a novel saliency-guided dynamic vision Transformer (SGDViT) for UAV tracking.
The proposed method designs a new task-specific object saliency mining network to refine the cross-correlation operation.
A lightweight saliency filtering Transformer further refines saliency information and increases the focus on appearance information.
arXiv Detail & Related papers (2023-03-08T05:01:00Z)
- Strong-TransCenter: Improved Multi-Object Tracking based on Transformers with Dense Representations [1.2891210250935146]
TransCenter is a transformer-based MOT architecture with dense object queries for accurately tracking all the objects.
This paper shows an improvement to this tracker using a post-processing mechanism based on the Track-by-Detection paradigm.
Our new tracker shows significant improvements in the IDF1 and HOTA metrics and comparable results on the MOTA metric.
arXiv Detail & Related papers (2022-10-24T19:47:58Z)
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution [50.10987776141901]
Recent vision transformers along with self-attention have achieved promising results on various computer vision tasks.
We introduce an effective hybrid architecture for super-resolution (SR) tasks, which leverages local features from CNNs and long-range dependencies captured by transformers.
Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
arXiv Detail & Related papers (2022-03-15T06:52:25Z)
- Efficient Visual Tracking with Exemplar Transformers [98.62550635320514]
We introduce the Exemplar Transformer, an efficient transformer for real-time visual object tracking.
E.T.Track, our visual tracker that incorporates Exemplar Transformer layers, runs at 47 fps on a CPU.
This is up to 8 times faster than other transformer-based models.
arXiv Detail & Related papers (2021-12-17T18:57:54Z)
- Siamese Transformer Pyramid Networks for Real-Time UAV Tracking [3.0969191504482243]
We introduce the Siamese Transformer Pyramid Network (SiamTPN), which inherits the advantages from both CNN and Transformer architectures.
Experiments on both aerial and prevalent tracking benchmarks achieve competitive results while operating at high speed.
Our fastest variant tracker operates at over 30 Hz on a single CPU core and obtains an AUC score of 58.1% on the LaSOT dataset.
arXiv Detail & Related papers (2021-10-17T13:48:31Z)
- ViDT: An Efficient and Effective Fully Transformer-based Object Detector [97.71746903042968]
Detection transformers are the first fully end-to-end learning systems for object detection.
Vision transformers are the first fully transformer-based architectures for image classification.
In this paper, we integrate Vision and Detection Transformers (ViDT) to build an effective and efficient object detector.
arXiv Detail & Related papers (2021-10-08T06:32:05Z)
- TransMOT: Spatial-Temporal Graph Transformer for Multiple Object Tracking [74.82415271960315]
We propose a solution named TransMOT to efficiently model the spatial and temporal interactions among objects in a video.
TransMOT is not only more computationally efficient than the traditional Transformer, but it also achieves better tracking accuracy.
The proposed method is evaluated on multiple benchmark datasets including MOT15, MOT16, MOT17, and MOT20.
arXiv Detail & Related papers (2021-04-01T01:49:05Z)
- Transformers Solve the Limited Receptive Field for Monocular Depth Prediction [82.90445525977904]
We propose TransDepth, an architecture which benefits from both convolutional neural networks and transformers.
This is the first paper to apply transformers to pixel-wise prediction problems involving continuous labels.
arXiv Detail & Related papers (2021-03-22T18:00:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.