GoMatching: A Simple Baseline for Video Text Spotting via Long and Short Term Matching
- URL: http://arxiv.org/abs/2401.07080v1
- Date: Sat, 13 Jan 2024 13:59:15 GMT
- Title: GoMatching: A Simple Baseline for Video Text Spotting via Long and Short Term Matching
- Authors: Haibin He, Maoyuan Ye, Jing Zhang, Juhua Liu, Dacheng Tao
- Abstract summary: Video text spotting presents an augmented challenge with the inclusion of tracking.
GoMatching focuses the training efforts on tracking while maintaining strong recognition performance.
We set a new record on the ICDAR15-video dataset and on a novel test set with arbitrary-shaped text.
- Score: 63.92600699525989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Beyond the text detection and recognition tasks in image text spotting, video
text spotting presents an augmented challenge with the inclusion of tracking.
While advanced end-to-end trainable methods have shown commendable performance,
the pursuit of multi-task optimization may pose the risk of producing
sub-optimal outcomes for individual tasks. In this paper, we highlight a main
bottleneck in the state-of-the-art video text spotter: the limited recognition
capability. In response to this issue, we propose to efficiently turn an
off-the-shelf query-based image text spotter into a specialist on video and
present a simple baseline termed GoMatching, which focuses the training efforts
on tracking while maintaining strong recognition performance. To adapt the
image text spotter to video datasets, we add a rescoring head to rescore each
detected instance's confidence via efficient tuning, leading to a better
tracking candidate pool. Additionally, we design a long-short term matching
module, termed LST-Matcher, to enhance the spotter's tracking capability by
integrating both long- and short-term matching results via Transformer. Based
on the above simple designs, GoMatching achieves impressive performance on two
public benchmarks, e.g., setting a new record on the ICDAR15-video dataset and on
a novel test set with arbitrary-shaped text, while saving considerable
training budgets. The code will be released at
https://github.com/Hxyz-123/GoMatching.
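The abstract describes the two added components only in prose. Below is a minimal, hypothetical PyTorch sketch of how a rescoring head and a long-short term matcher could be wired; it is not the authors' released code (see the GitHub link above), and the class names, dimensions, and the simple averaging used to fuse scores are illustrative assumptions.

# Hypothetical sketch of the two ideas in the abstract, not the paper's implementation:
# a rescoring head that re-estimates each detection's confidence, and an LST-Matcher
# that fuses short-term (previous frame) and long-term (track history) association cues.
import torch
import torch.nn as nn

class RescoreHead(nn.Module):
    # Re-scores each detected instance so the tracker sees a better candidate pool.
    def __init__(self, d_model=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, queries, spotter_scores):
        # Fuse the frozen spotter's confidence with a learned score (simple average, an assumption).
        rescored = torch.sigmoid(self.mlp(queries)).squeeze(-1)
        return 0.5 * (rescored + spotter_scores)

class LSTMatcher(nn.Module):
    # Associates current detections with existing tracks via short- and long-term attention.
    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.short_attn = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.long_attn = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)

    def forward(self, det_queries, prev_queries, history_queries):
        # det_queries:     (N_det, D)    text queries from the current frame
        # prev_queries:    (N_trk, D)    track queries from the previous frame (short term)
        # history_queries: (N_trk, T, D) per-track query history (long term)
        short = self.short_attn(det_queries.unsqueeze(0), prev_queries.unsqueeze(0))[0]
        long = self.long_attn(det_queries.unsqueeze(0), history_queries.flatten(0, 1).unsqueeze(0))[0]
        short_sim = short @ prev_queries.t()
        long_sim = long @ history_queries.mean(dim=1).t()
        return 0.5 * (short_sim + long_sim)  # (N_det, N_trk) association scores

if __name__ == "__main__":
    det, trk, hist, d = 5, 3, 4, 256
    conf = RescoreHead(d)(torch.randn(det, d), torch.rand(det))
    sim = LSTMatcher(d)(torch.randn(det, d), torch.randn(trk, d), torch.randn(trk, hist, d))
    print(conf.shape, sim.shape)  # torch.Size([5]) torch.Size([5, 3])

In such a sketch, the (N_det, N_trk) similarity matrix would feed a standard assignment step (e.g., greedy or Hungarian matching) to extend tracks frame by frame.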
Related papers
- Autogenic Language Embedding for Coherent Point Tracking [19.127052469203612]
We introduce a novel approach leveraging language embeddings to enhance the coherence of frame-wise visual features related to the same object.
Unlike existing visual-language schemes, our approach learns text embeddings from visual features through a dedicated mapping network.
Our approach significantly improves tracking trajectories in lengthy videos with substantial appearance variations.
arXiv Detail & Related papers (2024-07-30T11:02:45Z) - LOGO: Video Text Spotting with Language Collaboration and Glyph Perception Model [20.007650672107566]
Video text spotting (VTS) aims to simultaneously localize, recognize and track text instances in videos.
Recent methods track the zero-shot results of state-of-the-art image text spotters directly.
Fine-tuning transformer-based text spotters on specific datasets could yield performance enhancements.
arXiv Detail & Related papers (2024-05-29T15:35:09Z) - Text-Conditioned Resampler For Long Form Video Understanding [94.81955667020867]
We present a text-conditioned video resampler (TCR) module that uses a pre-trained visual encoder and large language model (LLM)
TCR can process more than 100 frames at a time with plain attention and without optimised implementations.
arXiv Detail & Related papers (2023-12-19T06:42:47Z) - TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z) - Text-based Person Search without Parallel Image-Text Data [52.63433741872629]
Text-based person search (TBPS) aims to retrieve the images of the target person from a large image gallery based on a given natural language description.
Existing methods are dominated by training models with parallel image-text pairs, which are very costly to collect.
In this paper, we make the first attempt to explore TBPS without parallel image-text data.
arXiv Detail & Related papers (2023-05-22T12:13:08Z) - DeepSolo: Let Transformer Decoder with Explicit Points Solo for Text
Spotting [129.73247700864385]
DeepSolo is a simple detection transformer baseline in which a single decoder with explicit points alone handles text detection and recognition simultaneously.
We introduce a text-matching criterion to deliver more accurate supervisory signals, thus enabling more efficient training.
arXiv Detail & Related papers (2022-11-19T19:06:22Z) - Prompting Visual-Language Models for Efficient Video Understanding [28.754997650215486]
This paper presents a simple method to efficiently adapt one pre-trained visual-language model to novel tasks with minimal training.
To bridge the gap between static images and videos, temporal information is encoded with lightweight Transformers stacking on top of frame-wise visual features.
arXiv Detail & Related papers (2021-12-08T18:58:16Z) - Text-based Person Search in Full Images via Semantic-Driven Proposal
Generation [42.25611020956918]
We propose a new end-to-end learning framework which jointly optimizes the pedestrian detection, identification and visual-semantic feature embedding tasks.
To take full advantage of the query text, the semantic features are leveraged to instruct the Region Proposal Network to pay more attention to the text-described proposals.
arXiv Detail & Related papers (2021-09-27T11:42:40Z) - TEACHTEXT: CrossModal Generalized Distillation for Text-Video Retrieval [103.85002875155551]
We propose a novel generalized distillation method, TeachText, for exploiting large-scale language pretraining.
We extend our method to video side modalities and show that we can effectively reduce the number of used modalities at test time.
Our approach advances the state of the art on several video retrieval benchmarks by a significant margin and adds no computational overhead at test time.
arXiv Detail & Related papers (2021-04-16T17:55:28Z)