Segment, Select, Correct: A Framework for Weakly-Supervised Referring Segmentation
- URL: http://arxiv.org/abs/2310.13479v2
- Date: Mon, 23 Oct 2023 09:42:29 GMT
- Title: Segment, Select, Correct: A Framework for Weakly-Supervised Referring Segmentation
- Authors: Francisco Eiras, Kemal Oksuz, Adel Bibi, Philip H.S. Torr, Puneet K. Dokania
- Abstract summary: Referring Image Segmentation (RIS) is the problem of identifying objects in images through natural language sentences.
We propose a novel weakly-supervised framework that tackles RIS by decomposing it into three steps.
Using only the first two steps (zero-shot segment and select) outperforms other zero-shot baselines by as much as 19%.
- Score: 67.73558686629998
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Referring Image Segmentation (RIS) - the problem of identifying objects in
images through natural language sentences - is a challenging task currently
mostly solved through supervised learning. However, while collecting referred
annotation masks is a time-consuming process, the few existing
weakly-supervised and zero-shot approaches fall significantly short in
performance compared to fully-supervised learning ones. To bridge the
performance gap without mask annotations, we propose a novel weakly-supervised
framework that tackles RIS by decomposing it into three steps: obtaining
instance masks for the object mentioned in the referencing instruction
(segment), using zero-shot learning to select a potentially correct mask for
the given instruction (select), and bootstrapping a model which allows for
fixing the mistakes of zero-shot selection (correct). In our experiments, using
only the first two steps (zero-shot segment and select) outperforms other
zero-shot baselines by as much as 19%, while our full method improves upon this
much stronger baseline and sets the new state-of-the-art for weakly-supervised
RIS, reducing the gap between the weakly-supervised and fully-supervised
methods in some cases from around 33% to as little as 14%. Code is available at
https://github.com/fgirbal/segment-select-correct.
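As an illustration of the first two steps, the sketch below assumes candidate instance masks for the mentioned object have already been produced by an off-the-shelf instance segmenter, and uses OpenAI's CLIP to pick the candidate whose cropped region best matches the referring expression. The cropping heuristic and the `select_mask` helper are illustrative simplifications, not the authors' exact implementation.

```python
import numpy as np
import torch
import clip                      # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def select_mask(image: Image.Image, masks: list, expression: str) -> int:
    """Return the index of the candidate instance mask that best matches the expression."""
    crops = []
    for m in masks:                                   # m: boolean [H, W] instance mask
        ys, xs = np.where(m)
        box = (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)
        crops.append(preprocess(image.crop(box)))     # crop the instance's bounding box
    image_batch = torch.stack(crops).to(device)
    text = clip.tokenize([expression]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(image_batch)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.T).squeeze(-1)  # cosine similarity per candidate
    return int(scores.argmax().item())
```

The third step (correct) would then treat the masks chosen this way as pseudo-labels to bootstrap a standard RIS model, which can override noisy zero-shot selections.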
Related papers
- Semantic Refocused Tuning for Open-Vocabulary Panoptic Segmentation [42.020470627552136]
Open-vocabulary panoptic segmentation is an emerging task aiming to accurately segment the image into semantically meaningful masks.
Mask classification is the main performance bottleneck for open-vocabulary panoptic segmentation.
We propose Semantic Refocused Tuning, a novel framework that greatly enhances open-vocabulary panoptic segmentation.
arXiv Detail & Related papers (2024-09-24T17:50:28Z)
- SafaRi: Adaptive Sequence Transformer for Weakly Supervised Referring Expression Segmentation [11.243400478302771]
Referring Expression Segmentation (RES) aims to provide a segmentation mask of the target object in an image referred to by the text.
We propose a weakly-supervised bootstrapping architecture for RES with several new algorithmic innovations.
arXiv Detail & Related papers (2024-07-02T16:02:25Z)
- Transductive Zero-Shot and Few-Shot CLIP [24.592841797020203]
This paper addresses the transductive zero-shot and few-shot CLIP classification challenge.
Inference is performed jointly across a mini-batch of unlabeled query samples, rather than on each instance independently.
Our approach yields a near-20% improvement in ImageNet accuracy over CLIP's zero-shot performance.
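As a toy illustration of what "transductive" means here (not the paper's actual algorithm, which is more involved), one can re-estimate the class prior from the whole unlabeled mini-batch and use it to re-weight each sample's zero-shot CLIP probabilities:

```python
import torch

def transductive_predict(logits: torch.Tensor, n_iters: int = 10) -> torch.Tensor:
    """logits: [batch, num_classes] zero-shot CLIP similarity scores for one mini-batch."""
    probs = logits.softmax(dim=-1)
    prior = torch.full((logits.shape[1],), 1.0 / logits.shape[1], device=logits.device)
    for _ in range(n_iters):
        weighted = probs * prior                          # re-weight by the current class prior
        weighted = weighted / weighted.sum(dim=-1, keepdim=True)
        prior = weighted.mean(dim=0)                      # re-estimate the prior from the batch
    return weighted.argmax(dim=-1)                        # joint prediction for the whole batch
```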
arXiv Detail & Related papers (2024-04-08T12:44:31Z)
- Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision [87.15580604023555]
Unpair-Seg is a novel weakly-supervised open-vocabulary segmentation framework.
It learns from unpaired image-mask and image-text pairs, which can be independently and efficiently collected.
It achieves 14.6% and 19.5% mIoU on the ADE-847 and PASCAL Context-459 datasets.
arXiv Detail & Related papers (2024-02-14T06:01:44Z)
- Learning Mask-aware CLIP Representations for Zero-Shot Segmentation [120.97144647340588]
The Image-Proposals CLIP Encoder (IP-CLIP) is proposed to handle arbitrary numbers of image and mask proposals simultaneously.
A mask-aware loss and a self-distillation loss are designed to fine-tune IP-CLIP, ensuring that CLIP is responsive to different mask proposals.
We conduct extensive experiments on the popular zero-shot benchmarks.
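A hedged sketch of what such objectives could look like (the loss forms below are assumptions, not the paper's definitions): the score assigned to each mask proposal is pushed to agree with that proposal's overlap with the ground-truth mask, while a self-distillation term keeps the fine-tuned encoder close to the frozen original.

```python
import torch
import torch.nn.functional as F

def mask_aware_losses(proposal_scores: torch.Tensor,     # [N] scores for N mask proposals
                      proposal_ious: torch.Tensor,       # [N] IoU of each proposal with the GT mask
                      student_feats: torch.Tensor,       # features from the fine-tuned encoder
                      teacher_feats: torch.Tensor        # features from the frozen original encoder
                      ) -> torch.Tensor:
    # Mask-aware term: the score should track mask quality (assumed smooth-L1 form).
    l_mask_aware = F.smooth_l1_loss(proposal_scores.sigmoid(), proposal_ious)
    # Self-distillation term: stay close to the frozen encoder (assumed MSE form).
    l_distill = F.mse_loss(student_feats, teacher_feats.detach())
    return l_mask_aware + l_distill
```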
arXiv Detail & Related papers (2023-09-30T03:27:31Z)
- Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion [24.02235805999193]
We propose a model capable of segmenting anything in a zero-shot manner without any annotations.
We introduce a simple yet effective iterative merging process based on measuring KL divergence among attention maps to merge them into valid segmentation masks.
On COCO-Stuff-27, our method surpasses the prior unsupervised zero-shot SOTA method by an absolute 26% in pixel accuracy and 17% in mean IoU.
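A simplified sketch of the described merging step: attention maps are treated as spatial probability distributions and greedily merged while the most similar pair (lowest symmetric KL divergence) stays below a threshold. The threshold value and merging-by-averaging are illustrative assumptions.

```python
import torch

def sym_kl(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Symmetric KL divergence between two attention maps normalized to sum to 1."""
    kl_pq = (p * ((p + eps) / (q + eps)).log()).sum()
    kl_qp = (q * ((q + eps) / (p + eps)).log()).sum()
    return 0.5 * (kl_pq + kl_qp)

def merge_attention_maps(maps: list, tau: float = 0.5) -> list:
    """Greedily merge [H, W] attention maps whose symmetric KL divergence is below tau."""
    maps = [m / m.sum() for m in maps]
    while len(maps) > 1:
        best_d, best_pair = None, None
        for i in range(len(maps)):
            for j in range(i + 1, len(maps)):
                d = sym_kl(maps[i], maps[j])
                if best_d is None or d < best_d:
                    best_d, best_pair = d, (i, j)
        if best_d > tau:                         # no sufficiently similar pair remains
            break
        i, j = best_pair
        merged = 0.5 * (maps[i] + maps[j])       # merge the closest pair by averaging
        maps = [m for k, m in enumerate(maps) if k not in (i, j)] + [merged / merged.sum()]
    return maps
```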
arXiv Detail & Related papers (2023-08-23T23:44:44Z)
- Cut and Learn for Unsupervised Object Detection and Instance Segmentation [65.43627672225624]
Cut-and-LEaRn (CutLER) is a simple approach for training unsupervised object detection and segmentation models.
CutLER is a zero-shot unsupervised detector that improves detection performance (AP50) by over 2.7 times across 11 benchmarks.
arXiv Detail & Related papers (2023-01-26T18:57:13Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach that leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
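A rough sketch of how two such supervision sources might be combined in one training step (the specific loss forms and masking scheme here are assumptions, not MUST's exact formulation): a cross-entropy term on confident pseudo-labels plus a reconstruction-style term on masked patches of the raw image.

```python
import torch
import torch.nn.functional as F

def two_source_loss(class_logits: torch.Tensor,        # [B, C] student class predictions
                    teacher_probs: torch.Tensor,       # [B, C] pseudo-label distribution (e.g. EMA teacher)
                    masked_patch_pred: torch.Tensor,   # [B, M, D] predictions for masked patches
                    masked_patch_target: torch.Tensor, # [B, M, D] targets computed from the raw image
                    conf_threshold: float = 0.7) -> torch.Tensor:
    # Supervision source 1: pseudo-labels, used only where the teacher is confident.
    conf, pseudo = teacher_probs.max(dim=-1)
    keep = conf > conf_threshold
    l_pseudo = F.cross_entropy(class_logits[keep], pseudo[keep]) if keep.any() else class_logits.sum() * 0
    # Supervision source 2: raw images, via prediction of masked patch content.
    l_mim = F.mse_loss(masked_patch_pred, masked_patch_target)
    return l_pseudo + l_mim
```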
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
- MaskSplit: Self-supervised Meta-learning for Few-shot Semantic Segmentation [10.809349710149533]
We propose a self-supervised training approach for learning few-shot segmentation models.
We first use unsupervised saliency estimation to obtain pseudo-masks on images.
We then train a simple prototype-based model over different splits of the pseudo-masks and augmentations of the images.
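A minimal sketch of the kind of prototype-based segmentation step this describes (the feature backbone and the 0.5 cut-off are illustrative assumptions, not the paper's exact design): the prototype is the masked average of support features under the saliency pseudo-mask, and the query is segmented by cosine similarity to that prototype.

```python
import torch
import torch.nn.functional as F

def prototype_segment(support_feat: torch.Tensor,   # [C, H, W] support image features
                      support_mask: torch.Tensor,   # [H, W] binary saliency pseudo-mask
                      query_feat: torch.Tensor      # [C, H, W] query image features
                      ) -> torch.Tensor:
    mask = support_mask.float()
    # Masked average pooling: mean support feature inside the pseudo-mask.
    prototype = (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)   # [C]
    # Cosine similarity between the prototype and every query location.
    sim = F.cosine_similarity(query_feat, prototype[:, None, None], dim=0)          # [H, W]
    return (sim > 0.5).float()   # threshold into a binary prediction (assumed cut-off)
```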
arXiv Detail & Related papers (2021-10-23T12:30:05Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)