Variance-insensitive and Target-preserving Mask Refinement for
Interactive Image Segmentation
- URL: http://arxiv.org/abs/2312.14387v1
- Date: Fri, 22 Dec 2023 02:31:31 GMT
- Title: Variance-insensitive and Target-preserving Mask Refinement for
Interactive Image Segmentation
- Authors: Chaowei Fang, Ziyin Zhou, Junye Chen, Hanjing Su, Qingyao Wu, Guanbin
Li
- Abstract summary: Point-based interactive image segmentation can ease the burden of mask annotation in applications such as semantic segmentation and image editing.
We introduce a novel method, Variance-Insensitive and Target-Preserving Mask Refinement, to enhance segmentation quality with fewer user inputs.
Experiments on GrabCut, Berkeley, SBD, and DAVIS datasets demonstrate our method's state-of-the-art performance in interactive image segmentation.
- Score: 68.16510297109872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point-based interactive image segmentation can ease the burden of mask
annotation in applications such as semantic segmentation and image editing.
However, fully extracting the target mask with limited user inputs remains
challenging. We introduce a novel method, Variance-Insensitive and
Target-Preserving Mask Refinement, to enhance segmentation quality with fewer
user inputs. Treating the previous segmentation result as the initial mask, an
iterative refinement process is commonly employed to progressively improve it.
Nevertheless, conventional techniques are sensitive to variance in this
initial mask. To circumvent this problem, our proposed
method incorporates a mask matching algorithm for ensuring consistent
inferences from different types of initial masks. We also introduce a
target-aware zooming algorithm to preserve object information during
downsampling, balancing efficiency and accuracy. Experiments on GrabCut,
Berkeley, SBD, and DAVIS datasets demonstrate our method's state-of-the-art
performance in interactive image segmentation.
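The target-aware zooming idea can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; it is a minimal approximation of the general recipe of cropping around the current target estimate (previous mask plus user clicks) before downsampling, with the function name, click format, and margin value chosen purely for illustration.

    import numpy as np

    def target_aware_crop(image, prev_mask, clicks, margin=0.4):
        """Illustrative only: crop around the current target estimate before
        downsampling, so a small object keeps enough pixels at low resolution.
        `prev_mask` is a binary HxW array; `clicks` is an (N, 2) array of
        (row, col) user clicks. All names and values here are assumptions."""
        ys, xs = np.nonzero(prev_mask)
        if ys.size == 0:
            # no previous mask yet: fall back to the extent of the user clicks
            ys, xs = clicks[:, 0], clicks[:, 1]
        top, bottom = int(ys.min()), int(ys.max())
        left, right = int(xs.min()), int(xs.max())
        # expand the target box by a relative margin, clipped to image bounds
        h, w = bottom - top + 1, right - left + 1
        top = max(top - int(margin * h), 0)
        left = max(left - int(margin * w), 0)
        bottom = min(bottom + int(margin * h), image.shape[0] - 1)
        right = min(right + int(margin * w), image.shape[1] - 1)
        # the crop (not the full frame) is then resized to the network input size
        crop = image[top:bottom + 1, left:right + 1]
        return crop, (top, left, bottom, right)

In an iterative refinement loop, the mask predicted in one round would supply `prev_mask` for the next, so the zoom window tracks the target as the estimate improves instead of downsampling the full frame.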
Related papers
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise.
We demonstrate our strategy's superiority in downstream tasks compared to random masking.
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
- On Mask-based Image Set Desensitization with Recognition Support [46.51027529020668]
We propose a mask-based image desensitization approach while supporting recognition.
We exploit an interpretation algorithm to maintain critical information for the recognition task.
In addition, we propose a feature selection masknet as the model adjustment method to improve the performance based on the masked images.
arXiv Detail & Related papers (2023-12-14T14:26:42Z)
- Completing Visual Objects via Bridging Generation and Segmentation [84.4552458720467]
MaskComp delineates the completion process through iterative stages of generation and segmentation.
In each iteration, the object mask is provided as an additional condition to boost image generation.
We demonstrate that the combination of one generation and one segmentation stage effectively functions as a mask denoiser.
arXiv Detail & Related papers (2023-10-01T22:25:40Z)
- Mask2Anomaly: Mask Transformer for Universal Open-set Segmentation [29.43462426812185]
We propose a paradigm change by shifting from a per-pixel classification to a mask classification.
Our mask-based method, Mask2Anomaly, demonstrates the feasibility of integrating a mask-classification architecture.
By comprehensive qualitative and quantitative evaluation, we show Mask2Anomaly achieves new state-of-the-art results.
arXiv Detail & Related papers (2023-09-08T20:07:18Z)
- Unmasking Anomalies in Road-Scene Segmentation [18.253109627901566]
Anomaly segmentation is a critical task for driving applications.
We propose a paradigm change by shifting from a per-pixel classification to a mask classification.
Mask2Anomaly demonstrates the feasibility of integrating an anomaly detection method in a mask-classification architecture.
arXiv Detail & Related papers (2023-07-25T08:23:10Z)
- Few-shot semantic segmentation via mask aggregation [5.886986014593717]
Few-shot semantic segmentation aims to recognize novel classes with only a few labelled samples.
Previous works have typically regarded it as a pixel-wise classification problem.
We introduce a mask-based classification method for addressing this problem.
arXiv Detail & Related papers (2022-02-15T07:13:09Z)
- Open-Vocabulary Instance Segmentation via Robust Cross-Modal Pseudo-Labeling [61.03262873980619]
Open-vocabulary instance segmentation aims at segmenting novel classes without mask annotations.
We propose a cross-modal pseudo-labeling framework, which generates training pseudo masks by aligning word semantics in captions with visual features of object masks in images.
Our framework is capable of labeling novel classes in captions via their word semantics to self-train a student model.
arXiv Detail & Related papers (2021-11-24T18:50:47Z)
- Per-Pixel Classification is Not All You Need for Semantic Segmentation [184.2905747595058]
Mask classification is sufficiently general to solve both semantic- and instance-level segmentation tasks.
We propose MaskFormer, a simple mask classification model which predicts a set of binary masks.
Our method outperforms both current state-of-the-art semantic (55.6 mIoU on ADE20K) and panoptic segmentation (52.7 PQ on COCO) models.
arXiv Detail & Related papers (2021-07-13T17:59:50Z)
- Proposal-Free Volumetric Instance Segmentation from Latent Single-Instance Masks [16.217524435617744]
This work introduces a new proposal-free instance segmentation method.
It builds on single-instance segmentation masks predicted across the entire image in a sliding window style.
In contrast to related approaches, our method concurrently predicts all masks, one for each pixel, and thus resolves any conflict jointly across the entire image.
arXiv Detail & Related papers (2020-09-10T17:09:23Z)