Success or Failure? Analyzing Segmentation Refinement with Few-Shot Segmentation
- URL: http://arxiv.org/abs/2407.04519v1
- Date: Fri, 5 Jul 2024 14:04:25 GMT
- Title: Success or Failure? Analyzing Segmentation Refinement with Few-Shot Segmentation
- Authors: Seonghyeon Moon, Haein Kong, Muhammad Haris Khan
- Abstract summary: We propose JFS, a method to identify the success of segmentation refinement leveraging a few-shot segmentation (FSS) model.
JFS is evaluated on the best and worst cases from SEPL to validate its effectiveness.
- Score: 9.854182420661754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation refinement aims to enhance the initial coarse masks generated by segmentation algorithms; the refined masks are expected to capture the details and contours of the target objects. Research on segmentation refinement has developed in response to the need for high-quality initial masks. However, to our knowledge, no method has been developed that can determine whether segmentation refinement has succeeded. Such a method could ensure the reliability of segmentation in applications where the segmentation outcome matters, and foster innovation in image processing technologies. To address this research gap, we propose JFS~(Judging From Support-set), a method for identifying the success of segmentation refinement by leveraging a few-shot segmentation (FSS) model. The traditional goal of FSS is to find a target object in a query image using target information given by a support set. In our proposed method, however, we use the FSS network in a novel way to assess segmentation refinement. Given two masks, a coarse mask and a refined mask produced by segmentation refinement, the two masks serve as support masks, while the existing support mask works as a ground-truth mask for judging whether the refined mask is more accurate than the coarse mask. We first obtain a coarse mask and refine it using SEPL (SAM Enhanced Pseudo-Labels) to get the two masks. These then become inputs to the FSS model, which judges whether the post-processing was successful. JFS is evaluated on the best and worst cases from SEPL to validate its effectiveness. The results show that JFS can determine whether SEPL succeeded.
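The decision rule described in the abstract can be sketched as a simple comparison: run the FSS model once with each candidate as the support mask, score the prediction each one induces against the known-good support mask, and declare the refinement a success if the refined mask wins. The `fss_predict` callable, the IoU scoring, and the mask shapes below are illustrative assumptions, not the paper's actual interface:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def judge_refinement(fss_predict, query_image, coarse_mask, refined_mask, gt_support_mask):
    """JFS-style sketch: use each candidate mask as the FSS support mask,
    score the resulting prediction against the ground-truth support mask,
    and report success if the refined mask induces the better prediction."""
    score_coarse = iou(fss_predict(query_image, coarse_mask), gt_support_mask)
    score_refined = iou(fss_predict(query_image, refined_mask), gt_support_mask)
    return bool(score_refined > score_coarse)
```

With a stub FSS model that simply echoes its support mask, a refined mask closer to the ground truth is correctly judged a success; the real method would substitute a trained FSS network for `fss_predict`.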
Related papers
- Boosting Few-Shot Semantic Segmentation Via Segment Anything Model [8.773067974503123]
In semantic segmentation, accurate prediction masks are crucial for downstream tasks such as medical image analysis and image editing.
Due to the lack of annotated data, few-shot semantic segmentation (FSS) performs poorly in predicting masks with precise contours.
We propose FSS-SAM to boost FSS methods by addressing the issue of inaccurate contour.
arXiv Detail & Related papers (2024-01-18T09:34:40Z)
- Variance-Insensitive and Target-Preserving Mask Refinement for Interactive Image Segmentation [68.16510297109872]
Point-based interactive image segmentation can ease the burden of mask annotation in applications such as semantic segmentation and image editing.
We introduce a novel method, Variance-Insensitive and Target-Preserving Mask Refinement to enhance segmentation quality with fewer user inputs.
Experiments on GrabCut, Berkeley, SBD, and DAVIS datasets demonstrate our method's state-of-the-art performance in interactive image segmentation.
arXiv Detail & Related papers (2023-12-22T02:31:31Z)
- Mask Matching Transformer for Few-Shot Segmentation [71.32725963630837]
Mask Matching Transformer (MM-Former) is a new paradigm for the few-shot segmentation task.
First, the MM-Former follows the paradigm of decompose first and then blend, allowing our method to benefit from an advanced potential-object segmenter.
We conduct extensive experiments on the popular COCO-$20^i$ and PASCAL-$5^i$ benchmarks.
arXiv Detail & Related papers (2022-12-05T11:00:32Z)
- Discovering Object Masks with Transformers for Unsupervised Semantic Segmentation [75.00151934315967]
MaskDistill is a novel framework for unsupervised semantic segmentation.
Our framework does not latch onto low-level image cues and is not limited to object-centric datasets.
arXiv Detail & Related papers (2022-06-13T17:59:43Z)
- HMFS: Hybrid Masking for Few-Shot Segmentation [27.49000348046462]
We develop a simple, effective, and efficient approach to enhance feature masking (FM).
We compensate for the loss of fine-grained spatial details in the FM technique by investigating and leveraging a complementary basic input masking method.
Experimental results on three publicly available benchmarks reveal that HMFS outperforms the current state-of-the-art methods by visible margins.
arXiv Detail & Related papers (2022-03-24T03:07:20Z)
- Mask Transfiner for High-Quality Instance Segmentation [95.74244714914052]
We present Mask Transfiner for high-quality and efficient instance segmentation.
Our approach only processes detected error-prone tree nodes and self-corrects their errors in parallel.
Our code and trained models will be available at http://vis.xyz/pub/transfiner.
arXiv Detail & Related papers (2021-11-26T18:58:22Z) - Per-Pixel Classification is Not All You Need for Semantic Segmentation [184.2905747595058]
Mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks.
We propose MaskFormer, a simple mask classification model which predicts a set of binary masks.
Our method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.
arXiv Detail & Related papers (2021-07-13T17:59:50Z) - RefineMask: Towards High-Quality Instance Segmentation with Fine-Grained
Features [53.71163467683838]
RefineMask is a new method for high-quality instance segmentation of objects and scenes.
It incorporates fine-grained features during the instance-wise segmenting process in a multi-stage manner.
It succeeds in segmenting hard cases such as bent parts of objects that are over-smoothed by most previous methods.
arXiv Detail & Related papers (2021-04-17T15:09:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.