CTVIS: Consistent Training for Online Video Instance Segmentation
- URL: http://arxiv.org/abs/2307.12616v1
- Date: Mon, 24 Jul 2023 08:44:25 GMT
- Title: CTVIS: Consistent Training for Online Video Instance Segmentation
- Authors: Kaining Ying, Qing Zhong, Weian Mao, Zhenhua Wang, Hao Chen, Lin
Yuanbo Wu, Yifan Liu, Chengxiang Fan, Yunzhi Zhuge, Chunhua Shen
- Abstract summary: Discrimination of instance embeddings plays a vital role in associating instances across time for online video instance segmentation (VIS)
Recent online VIS methods leverage CIs sourced from one reference frame only, which we argue is insufficient for learning highly discriminative embeddings.
We propose a simple yet effective training strategy, called Consistent Training for Online VIS (CTVIS), which is devoted to aligning the training and inference pipelines.
- Score: 62.957370691452844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The discrimination of instance embeddings plays a vital role in associating
instances across time for online video instance segmentation (VIS). Instance
embedding learning is directly supervised by the contrastive loss computed upon
the contrastive items (CIs), which are sets of anchor/positive/negative
embeddings. Recent online VIS methods leverage CIs sourced from one reference
frame only, which we argue is insufficient for learning highly discriminative
embeddings. Intuitively, a possible strategy to enhance CIs is replicating the
inference phase during training. To this end, we propose a simple yet effective
training strategy, called Consistent Training for Online VIS (CTVIS), which is
devoted to aligning the training and inference pipelines in terms of building
CIs. Specifically, CTVIS constructs CIs by mirroring inference: it adopts the
momentum-averaged embedding and the memory-bank storage mechanisms, and adds
noise to the relevant embeddings. Such an extension allows a reliable
comparison between embeddings of current instances and the stable
representations of historical instances, thereby conferring an advantage in
modeling VIS challenges such as occlusion, re-identification, and deformation.
Empirically, CTVIS outstrips the SOTA VIS models by up to +5.0 points on three
VIS benchmarks, including YTVIS19 (55.1% AP), YTVIS21 (50.1% AP) and OVIS
(35.5% AP). Furthermore, we find that pseudo-videos transformed from images can
train robust models surpassing fully-supervised ones.
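The mechanism described above is concrete enough to sketch in code. Below is a minimal, hypothetical PyTorch sketch of how contrastive items (CIs) might be built with a momentum-averaged memory bank and noise injection; the class and function names, hyperparameter values, and the training loop are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Per-instance momentum-averaged embeddings, mirroring the inference-time tracker."""

    def __init__(self, momentum=0.9, noise_std=0.05):
        self.momentum = momentum
        self.noise_std = noise_std
        self.bank = {}  # instance id -> running embedding (Tensor[D])

    def update(self, inst_id, emb):
        # Momentum averaging keeps historical representations stable.
        old = self.bank.get(inst_id)
        new = emb if old is None else self.momentum * old + (1 - self.momentum) * emb
        self.bank[inst_id] = new.detach()

    def noisy(self, inst_id):
        # Gaussian noise simulates the imperfect embeddings seen at inference.
        emb = self.bank[inst_id]
        return emb + self.noise_std * torch.randn_like(emb)

def ci_loss(anchor, positive, negatives, tau=0.07):
    # InfoNCE-style loss over one contrastive item: the positive key sits at
    # index 0, negatives follow, and the anchor should match the positive.
    anchor = F.normalize(anchor, dim=0)
    keys = F.normalize(torch.stack([positive] + negatives), dim=1)
    logits = (keys @ anchor) / tau
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Toy training step over a clip. `clip_embeddings` is assumed to be a list of
# {instance_id: Tensor[D]} dicts produced by the segmenter, one per frame.
def train_step(clip_embeddings):
    bank = MemoryBank()
    losses = []
    for frame_embs in clip_embeddings:
        for inst_id, emb in frame_embs.items():
            if inst_id in bank.bank:
                negatives = [bank.noisy(i) for i in bank.bank if i != inst_id]
                if negatives:
                    losses.append(ci_loss(emb, bank.noisy(inst_id), negatives))
            bank.update(inst_id, emb)
    return torch.stack(losses).mean() if losses else torch.tensor(0.0)
```

Under the same reading, the pseudo-video finding in the final sentence would amount to feeding this loop clips synthesized by repeatedly augmenting a single annotated image (e.g., random affine jitter per frame); that interpretation is ours, not spelled out in the abstract.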
Related papers
- UVIS: Unsupervised Video Instance Segmentation [65.46196594721545]
Video instance segmentation requires classifying, segmenting, and tracking every object across video frames.
We propose UVIS, a novel Unsupervised Video Instance Segmentation framework that can perform video instance segmentation without any video annotations or dense label-based pretraining.
Our framework consists of three essential steps: frame-level pseudo-label generation, transformer-based VIS model training, and query-based tracking.
arXiv Detail & Related papers (2024-06-11T03:05:50Z) - SMC-NCA: Semantic-guided Multi-level Contrast for Semi-supervised Temporal Action Segmentation [53.010417880335424]
Semi-supervised temporal action segmentation (SS-TAS) aims to perform frame-wise classification in long untrimmed videos.
Recent studies have shown the potential of contrastive learning in unsupervised representation learning using unlabelled data.
We propose a novel Semantic-guided Multi-level Contrast scheme with a Neighbourhood-Consistency-Aware unit (SMC-NCA) to extract strong frame-wise representations.
arXiv Detail & Related papers (2023-12-19T17:26:44Z) - Offline-to-Online Knowledge Distillation for Video Instance Segmentation [13.270872063217022]
We present offline-to-online knowledge distillation (OOKD) for video instance segmentation (VIS)
Our method transfers a wealth of video knowledge from an offline model to an online model for consistent prediction.
Our method also achieves state-of-the-art performance on YTVIS-21, YTVIS-22, and OVIS datasets, with mAP scores of 46.1%, 43.6%, and 31.1%, respectively.
arXiv Detail & Related papers (2023-02-15T08:24:37Z) - A Generalized Framework for Video Instance Segmentation [49.41441806931224]
The handling of long videos with complex and occluded sequences has emerged as a new challenge in the video instance segmentation (VIS) community.
We propose a Generalized framework for VIS, namely GenVIS, that achieves state-of-the-art performance on challenging benchmarks.
We evaluate our approach on popular VIS benchmarks, achieving state-of-the-art results on YouTube-VIS 2019/2021/2022 and Occluded VIS (OVIS)
arXiv Detail & Related papers (2022-11-16T11:17:19Z) - STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation [47.28515170195206]
Video Instance Segmentation (VIS) is a task that simultaneously requires classification, segmentation, and instance association in a video.
Recent VIS approaches rely on sophisticated pipelines to achieve this goal, including RoI-related operations or 3D convolutions.
We present a simple and efficient single-stage VIS framework based on the instance segmentation method CondInst.
arXiv Detail & Related papers (2022-02-08T09:34:26Z) - Crossover Learning for Fast Online Video Instance Segmentation [53.5613957875507]
We present a novel crossover learning scheme that uses the instance feature in the current frame to pixel-wisely localize the same instance in other frames.
To our knowledge, CrossVIS achieves state-of-the-art performance among all online VIS methods and shows a decent trade-off between latency and accuracy.
arXiv Detail & Related papers (2021-04-13T06:47:40Z)