PaintSeg: Training-free Segmentation via Painting
- URL: http://arxiv.org/abs/2305.19406v3
- Date: Sun, 4 Jun 2023 17:05:56 GMT
- Title: PaintSeg: Training-free Segmentation via Painting
- Authors: Xiang Li, Chung-Ching Lin, Yinpeng Chen, Zicheng Liu, Jinglu Wang,
Bhiksha Raj
- Abstract summary: PaintSeg is a new unsupervised method for segmenting objects without any training.
Inpainting and outpainting are alternated, with the former masking the foreground and filling in the background, and the latter masking the background while recovering the missing part of the foreground object.
Our experimental results demonstrate that PaintSeg outperforms existing approaches in coarse mask-prompt, box-prompt, and point-prompt segmentation tasks.
- Score: 50.17936803209125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper introduces PaintSeg, a new unsupervised method for segmenting
objects without any training. We propose an adversarial masked contrastive
painting (AMCP) process, which creates a contrast between the original image
and a painted image in which a masked area is painted using off-the-shelf
generative models. During the painting process, inpainting and outpainting are
alternated, with the former masking the foreground and filling in the
background, and the latter masking the background while recovering the missing
part of the foreground object. Inpainting and outpainting, also referred to as
I-step and O-step, allow our method to gradually advance the target
segmentation mask toward the ground truth without supervision or training.
PaintSeg can be configured to work with a variety of prompts, e.g. coarse
masks, boxes, scribbles, and points. Our experimental results demonstrate that
PaintSeg outperforms existing approaches in coarse mask-prompt, box-prompt, and
point-prompt segmentation tasks, providing a training-free solution suitable
for unsupervised segmentation.
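The alternating I-step/O-step loop described in the abstract can be sketched as follows. This is a hypothetical, minimal illustration, not the paper's implementation: the `paint` function (a simple mean fill over the masked region) stands in for the off-the-shelf generative painter, and the contrast between the original and painted images is reduced to a per-pixel difference threshold.

```python
import numpy as np

def paint(image, region):
    """Toy stand-in for an off-the-shelf generative painter:
    fill `region` with the mean of the pixels outside it."""
    painted = image.copy()
    if (~region).any():
        painted[region] = image[~region].mean()
    return painted

def amcp_iteration(image, mask, thresh=0.5):
    """One I-step followed by one O-step of a (heavily
    simplified) adversarial masked contrastive painting loop."""
    # I-step: mask the foreground and fill in the background;
    # only pixels that contrast with the fill stay foreground.
    bg_fill = paint(image, mask)
    mask = np.abs(image - bg_fill) > thresh
    # O-step: mask the background and repaint it from the
    # foreground; pixels the repaint reproduces are recovered
    # as missing parts of the object.
    fg_fill = paint(image, ~mask)
    return mask | (np.abs(image - fg_fill) < thresh)

def paintseg(image, init_mask, n_iters=3):
    """Alternate I- and O-steps to advance a coarse prompt
    mask toward the object, without supervision or training."""
    mask = init_mask.copy()
    for _ in range(n_iters):
        mask = amcp_iteration(image, mask)
    return mask
```

On a toy image with a bright square on a dark background, a coarse box prompt that both overshoots the object and misses part of it converges to the exact object mask: the I-step prunes the background false positives, and the O-step recovers the missed foreground rows.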
Related papers
- Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis [63.757624792753205]
We present Zero-Painter, a framework for layout-conditional text-to-image synthesis.
Our method utilizes object masks and individual descriptions, coupled with a global text prompt, to generate images with high fidelity.
arXiv Detail & Related papers (2024-06-06T13:02:00Z)
- Sketch-guided Image Inpainting with Partial Discrete Diffusion Process [5.005162730122933]
We introduce a novel partial discrete diffusion process (PDDP) for sketch-guided inpainting.
PDDP corrupts the masked regions of the image and reconstructs these masked regions conditioned on hand-drawn sketches.
The proposed novel transformer module accepts two inputs -- the image containing the masked region to be inpainted and the query sketch to model the reverse diffusion process.
arXiv Detail & Related papers (2024-04-18T07:07:38Z)
- Inpainting-Driven Mask Optimization for Object Removal [15.429649454099085]
This paper proposes a mask optimization method for improving the quality of object removal using image inpainting.
In our method, this domain gap is resolved by training the inpainting network with object masks extracted by segmentation.
To optimize the object masks for inpainting, the segmentation network is connected to the inpainting network and end-to-end trained to improve the inpainting performance.
arXiv Detail & Related papers (2024-03-23T13:52:16Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- DIFAI: Diverse Facial Inpainting using StyleGAN Inversion [18.400846952014188]
We propose a novel framework for diverse facial inpainting exploiting the embedding space of StyleGAN.
Our framework employs pSp encoder and SeFa algorithm to identify semantic components of the StyleGAN embeddings and feed them into our proposed SPARN decoder.
arXiv Detail & Related papers (2023-01-20T06:51:34Z)
- Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area.
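As summarized above, PAR reduces to a simple area ratio over binary masks. A minimal sketch, assuming boolean-mask inputs (the function name and signature are illustrative, not the paper's API):

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpainted_mask):
    """PAR = (objectionable inpainted area) / (entire inpainted area).
    Both inputs are boolean arrays; artifact pixels are only
    counted inside the inpainted region."""
    inpainted = inpainted_mask.astype(bool)
    artifacts = artifact_mask.astype(bool) & inpainted
    total = inpainted.sum()
    return artifacts.sum() / total if total else 0.0
```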
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
- Shape-Aware Masking for Inpainting in Medical Imaging [49.61617087640379]
Inpainting has been proposed as a successful deep learning technique for unsupervised medical image model discovery.
We introduce a method for generating shape-aware masks for inpainting, which aims at learning the statistical shape prior.
We propose an unsupervised guided masking approach based on an off-the-shelf inpainting model and a superpixel over-segmentation algorithm.
arXiv Detail & Related papers (2022-07-12T18:35:17Z)
- RePaint: Inpainting using Denoising Diffusion Probabilistic Models [161.74792336127345]
Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask.
We propose RePaint: a Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks.
We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
arXiv Detail & Related papers (2022-01-24T18:40:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.