Beyond Visual Cues: Synchronously Exploring Target-Centric Semantics for
Vision-Language Tracking
- URL: http://arxiv.org/abs/2311.17085v2
- Date: Mon, 19 Feb 2024 10:32:13 GMT
- Title: Beyond Visual Cues: Synchronously Exploring Target-Centric Semantics for
Vision-Language Tracking
- Authors: Jiawei Ge, Xiangmei Chen, Jiuxin Cao, Xuelin Zhu, Bo Liu
- Abstract summary: Single object tracking aims to locate one specific target in video sequences, given its initial state. Vision-Language (VL) tracking has emerged as a promising approach.
We present a novel tracker that progressively explores target-centric semantics for VL tracking.
- Score: 3.416427651955299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single object tracking aims to locate one specific target in video sequences,
given its initial state. Classical trackers rely solely on visual cues,
restricting their ability to handle challenges such as appearance variations,
ambiguity, and distractions. Hence, Vision-Language (VL) tracking has emerged
as a promising approach, incorporating language descriptions to directly
provide high-level semantics and enhance tracking performance. However, current
VL trackers have not fully exploited the power of VL learning, as they suffer
from limitations such as heavily relying on off-the-shelf backbones for feature
extraction, ineffective VL fusion designs, and the absence of VL-related loss
functions. Consequently, we present a novel tracker that progressively explores
target-centric semantics for VL tracking. Specifically, we propose the first
Synchronous Learning Backbone (SLB) for VL tracking, which consists of two
novel modules: the Target Enhance Module (TEM) and the Semantic Aware Module
(SAM). These modules enable the tracker to perceive target-related semantics
and comprehend the context of both visual and textual modalities at the same
pace, facilitating VL feature extraction and fusion at different semantic
levels. Moreover, we devise the dense matching loss to further strengthen
multi-modal representation learning. Extensive experiments on VL tracking
datasets demonstrate the superiority and effectiveness of our methods.
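The abstract does not spell out the dense matching loss; as a rough, hypothetical illustration, a dense (token-wise) cross-modal matching objective could take the form of a symmetric InfoNCE over visual and textual tokens. The function name, max-mean pooling scheme, and temperature below are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch of a dense cross-modal matching loss (not the paper's
# exact formulation): every visual token is scored against the text tokens of
# the same video, and matched pairs are pulled together across the batch.
import torch
import torch.nn.functional as F

def dense_matching_loss(vis, txt, tau=0.07):
    """vis: (B, Nv, D) visual tokens; txt: (B, Nt, D) text tokens."""
    vis = F.normalize(vis, dim=-1)
    txt = F.normalize(txt, dim=-1)
    B = vis.size(0)
    # Token-level similarity for every (video, description) pair: (B, B, Nv, Nt)
    sim = torch.einsum('bnd,cmd->bcnm', vis, txt)
    # Reduce to a pairwise score by max-pooling over text tokens, then
    # averaging over visual tokens (a common dense-alignment reduction).
    scores = sim.max(dim=-1).values.mean(dim=-1) / tau   # (B, B)
    labels = torch.arange(B, device=vis.device)
    # Symmetric InfoNCE: match videos to descriptions and vice versa.
    return 0.5 * (F.cross_entropy(scores, labels) +
                  F.cross_entropy(scores.t(), labels))
```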
Related papers
- DTLLM-VLT: Diverse Text Generation for Visual Language Tracking Based on LLM [23.551036494221222]
Visual Language Tracking (VLT) enhances single object tracking (SOT) by integrating natural language descriptions of a video, enabling precise tracking of a specified object.
Most VLT benchmarks are annotated at a single granularity and lack a coherent semantic framework to provide scientific guidance.
We introduce DTLLM-VLT, which automatically generates extensive and multi-granularity text to enhance environmental diversity.
arXiv Detail & Related papers (2024-05-20T16:01:01Z)
- Unifying Visual and Vision-Language Tracking via Contrastive Learning [34.49865598433915]
Single object tracking aims to locate the target object in a video sequence according to references from different modalities.
Due to the gap between modalities, most existing trackers are designed for only one or a subset of these reference settings.
We present a unified tracker called UVLTrack, which can simultaneously handle all three reference settings.
arXiv Detail & Related papers (2024-01-20T13:20:54Z)
- Towards Unified Token Learning for Vision-Language Tracking [65.96561538356315]
We present a vision-language (VL) tracking pipeline, termed MMTrack, which casts VL tracking as a token generation task.
Our proposed framework serializes the language description and bounding box into a sequence of discrete tokens.
In this new design paradigm, all token queries are required to perceive the desired target and directly predict its spatial coordinates.
arXiv Detail & Related papers (2023-08-27T13:17:34Z)
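As a hedged sketch of the token-generation idea in MMTrack above, in the spirit of Pix2Seq-style coordinate quantization; the bin count (1000), normalization, and helper names are illustrative assumptions, not the paper's actual settings.

```python
# Minimal sketch of serializing a bounding box into discrete tokens.
def box_to_tokens(box, img_w, img_h, num_bins=1000):
    """box: (x1, y1, x2, y2) in pixels -> list of 4 integer token ids."""
    x1, y1, x2, y2 = box
    norm = [x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h]
    return [min(int(v * num_bins), num_bins - 1) for v in norm]

def tokens_to_box(tokens, img_w, img_h, num_bins=1000):
    """Invert the quantization back to (approximate) pixel coordinates."""
    c = [(t + 0.5) / num_bins for t in tokens]  # bin centers
    return (c[0] * img_w, c[1] * img_h, c[2] * img_w, c[3] * img_h)

# Example: on a 1280x720 frame, the box round-trips with sub-bin error.
tokens = box_to_tokens((100, 50, 400, 300), 1280, 720)  # [78, 69, 312, 416]
box = tokens_to_box(tokens, 1280, 720)
```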
- Divert More Attention to Vision-Language Object Tracking [87.31882921111048]
We argue that the lack of large-scale vision-language annotated videos and ineffective vision-language interaction learning motivate the design of a more effective vision-language representation for tracking.
In particular, we first propose a general attribute annotation strategy to decorate videos in six popular tracking benchmarks, contributing a large-scale vision-language tracking database with more than 23,000 videos.
We then introduce a novel framework that improves tracking by learning a unified-adaptive VL representation, whose core components are the proposed asymmetric architecture search and a modality mixer (ModaMixer).
arXiv Detail & Related papers (2023-07-19T15:22:06Z)
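The summary above does not detail ModaMixer; one plausible reading, sketched below under stated assumptions (projection size, sigmoid gating), is a language-conditioned channel-wise reweighting of visual features.

```python
# Hedged sketch of a ModaMixer-style block: the language embedding acts as a
# channel-wise selector over visual features. The projection sizes and the
# sigmoid gate are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ModaMixerSketch(nn.Module):
    def __init__(self, vis_dim=256, txt_dim=768):
        super().__init__()
        self.select = nn.Sequential(
            nn.Linear(txt_dim, vis_dim), nn.Sigmoid())  # per-channel gate

    def forward(self, vis_feat, txt_feat):
        """vis_feat: (B, C, H, W); txt_feat: (B, Dt) pooled text embedding."""
        gate = self.select(txt_feat)[:, :, None, None]   # (B, C, 1, 1)
        return vis_feat * gate                           # channel-wise mixing
```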
- All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment [23.486297020327257]
The current vision-language (VL) tracking framework consists of three parts, i.e., a visual feature extractor, a language feature extractor, and a fusion model.
We propose an All-in-One framework, which learns joint feature extraction and interaction by adopting a unified transformer backbone.
arXiv Detail & Related papers (2023-07-07T03:51:21Z)
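A minimal sketch of the unified-backbone idea described above, assuming a plain transformer encoder over concatenated text, template, and search-region tokens; the token counts, dimensions, and layer settings are placeholders, not the paper's configuration.

```python
# Text, template, and search-region tokens are concatenated and processed by
# a single transformer encoder, so feature extraction and cross-modal
# interaction happen in the same layers.
import torch
import torch.nn as nn

dim = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=4)

txt = torch.randn(2, 12, dim)      # embedded language description
tmpl = torch.randn(2, 64, dim)     # template-patch tokens
search = torch.randn(2, 256, dim)  # search-region patch tokens

joint = encoder(torch.cat([txt, tmpl, search], dim=1))  # (2, 332, dim)
search_out = joint[:, -256:]  # language/template-conditioned search features
```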
- GLIPv2: Unifying Localization and Vision-Language Understanding [161.1770269829139]
We present GLIPv2, a grounded VL understanding model that serves both localization tasks and Vision-Language (VL) understanding tasks.
GLIPv2 unifies localization pre-training and Vision-Language Pre-training with three pre-training tasks.
We show that a single GLIPv2 model achieves near SoTA performance on various localization and understanding tasks.
arXiv Detail & Related papers (2022-06-12T20:31:28Z)
- PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models [127.17675443137064]
We introduce PEVL, which enhances the pre-training and prompt tuning of vision-language models with explicit object position modeling.
PEVL reformulates discretized object positions and language in a unified language modeling framework.
We show that PEVL enables state-of-the-art performance on position-sensitive tasks such as referring expression comprehension and phrase grounding.
arXiv Detail & Related papers (2022-05-23T10:17:53Z)
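A minimal sketch of PEVL-style position reformulation as described above: coordinates are discretized and spliced into the text as extra vocabulary tokens, so one language-modeling objective covers both words and positions. The `<pos_k>` token format and bin count are illustrative assumptions.

```python
# Object coordinates become discrete position tokens appended to the phrase.
def positions_to_text(phrase, box, img_w, img_h, num_bins=512):
    x1, y1, x2, y2 = box
    bins = [int(v * num_bins) for v in
            (x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h)]
    bins = [min(b, num_bins - 1) for b in bins]
    pos = ' '.join(f'<pos_{b}>' for b in bins)
    return f'{phrase} {pos}'

# e.g. "a red car <pos_40> <pos_35> <pos_160> <pos_213>"
print(positions_to_text('a red car', (100, 50, 400, 300), 1280, 720))
```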
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Self-supervised Video Object Segmentation [76.83567326586162]
The objective of this paper is self-supervised representation learning, with the goal of solving semi-supervised video object segmentation (a.k.a. dense tracking).
We make the following contributions: (i) we propose to improve the existing self-supervised approach with a simple yet more effective memory mechanism for long-term correspondence matching; (ii) by augmenting the self-supervised approach with an online adaptation module, our method successfully alleviates tracker drift caused by spatial-temporal discontinuity; (iii) we demonstrate state-of-the-art results among self-supervised approaches on DAVIS-2017 and YouTube-VOS.
arXiv Detail & Related papers (2020-06-22T17:55:59Z)
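As a hedged illustration of the correspondence-matching core described above: a softmax affinity between query-frame and memory-frame features propagates the memory's segmentation labels to the query frame. The feature shapes and temperature are assumptions, not the paper's settings.

```python
# Label propagation via dense feature correspondence.
import torch
import torch.nn.functional as F

def propagate_labels(q_feat, m_feat, m_labels, tau=0.07):
    """q_feat: (N_q, D); m_feat: (N_m, D); m_labels: (N_m, K) one-hot masks."""
    q = F.normalize(q_feat, dim=-1)
    m = F.normalize(m_feat, dim=-1)
    affinity = F.softmax(q @ m.t() / tau, dim=-1)  # (N_q, N_m)
    return affinity @ m_labels                     # (N_q, K) soft labels
```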