Cartoon-texture evolution for two-region image segmentation
- URL: http://arxiv.org/abs/2203.03513v1
- Date: Mon, 7 Mar 2022 16:50:01 GMT
- Title: Cartoon-texture evolution for two-region image segmentation
- Authors: Laura Antonelli, Valentina De Simone, Marco Viola
- Abstract summary: Two-region image segmentation is a process of dividing an image into two regions of interest, i.e., the foreground and the background.
Chan, Esedoğlu, Nikolova, SIAM Journal on Applied Mathematics 66(5), 1632-1648, 2006.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two-region image segmentation is the process of dividing an image into two
regions of interest, i.e., the foreground and the background. To this aim, Chan
et al. [Chan, Esedoğlu, Nikolova, SIAM Journal on Applied Mathematics 66(5),
1632-1648, 2006] designed a model well suited for smooth images. One drawback
of this model is that it may produce a bad segmentation when the image contains
oscillatory components. Based on a cartoon-texture decomposition of the image
to be segmented, we propose a new model that is able to produce an accurate
segmentation of images that also contain noise or oscillatory information such
as texture. The novel model leads to a non-smooth constrained optimization problem
which we solve by means of the ADMM method. The convergence of the numerical
scheme is also proved. Several experiments on smooth, noisy, and textural
images show the effectiveness of the proposed model.
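The abstract states that the new model yields a non-smooth constrained optimization problem solved by ADMM. The sketch below is not the authors' model: it applies the Chan-Esedoğlu-Nikolova-style convex relaxation (minimize the total variation of a relaxed labeling u plus a linear region-fidelity term, with u constrained to [0, 1], then threshold) to a 1D signal, and it skips the cartoon-texture decomposition the paper builds on (there, the fidelity term would be computed on the cartoon component). All parameter values (lam, rho, n_iter) and the region means c1, c2 are illustrative choices.

```python
import numpy as np

def segment_admm(f, c1, c2, lam=10.0, rho=1.0, n_iter=300):
    """Minimize  ||D u||_1 + lam * r^T u   subject to  0 <= u <= 1,
    where r_i = (f_i - c1)^2 - (f_i - c2)^2 is the region-fidelity term.
    ADMM with the stacked splitting  z1 = D u (TV term), z2 = u (box term)."""
    n = f.size
    r = (f - c1) ** 2 - (f - c2) ** 2
    D = np.diff(np.eye(n), axis=0)           # forward-difference operator, (n-1) x n
    A = rho * (D.T @ D + np.eye(n))          # u-update system matrix (SPD)
    z1 = np.zeros(n - 1); y1 = np.zeros(n - 1)   # TV split and its scaled dual
    z2 = np.zeros(n);     y2 = np.zeros(n)       # box split and its scaled dual
    u = np.zeros(n)
    for _ in range(n_iter):
        # u-update: minimize the quadratic, i.e. solve a linear system
        b = -lam * r + rho * (D.T @ (z1 - y1)) + rho * (z2 - y2)
        u = np.linalg.solve(A, b)
        # z1-update: soft-thresholding (prox of the l1 norm)
        v = D @ u + y1
        z1 = np.sign(v) * np.maximum(np.abs(v) - 1.0 / rho, 0.0)
        # z2-update: projection onto the box [0, 1]
        z2 = np.clip(u + y2, 0.0, 1.0)
        # scaled dual ascent
        y1 += D @ u - z1
        y2 += u - z2
    return u

# Piecewise-constant 1D signal with a small oscillatory ("texture") component.
x = np.arange(20)
f = np.where(x < 10, 0.1, 0.9) + 0.05 * np.cos(2.0 * x)
u = segment_admm(f, c1=0.1, c2=0.9)
seg = u > 0.5        # threshold the relaxed labeling to recover the two regions
```

Thresholding the relaxed solution at any level in (0, 1) recovers a global minimizer of the binary problem in the Chan-Esedoğlu-Nikolova setting; 0.5 is the conventional choice. The dense difference matrix and direct solve keep the sketch short; a 2D implementation would use sparse operators or FFT-based solves for the u-update.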
Related papers
- UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation [64.01742988773745]
An increasing privacy concern exists regarding training large-scale image segmentation models on unauthorized private data.
We exploit the concept of unlearnable examples to make images unusable to model training by generating and adding unlearnable noise into the original images.
We empirically verify the effectiveness of UnSeg across 6 mainstream image segmentation tasks, 10 widely used datasets, and 7 different network architectures.
arXiv Detail & Related papers (2024-10-13T16:34:46Z) - ZoDi: Zero-Shot Domain Adaptation with Diffusion-Based Image Transfer [13.956618446530559]
This paper proposes a zero-shot domain adaptation method based on diffusion models, called ZoDi.
First, we utilize an off-the-shelf diffusion model to synthesize target-like images by transferring the domain of source images to the target domain.
Secondly, we train the model using both source images and synthesized images with the original representations to learn domain-robust representations.
arXiv Detail & Related papers (2024-03-20T14:58:09Z) - Multi-stream Cell Segmentation with Low-level Cues for Multi-modality
Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z) - Saliency-Driven Active Contour Model for Image Segmentation [2.8348950186890467]
We propose a novel model that uses the advantages of a saliency map with local image information (LIF) and overcomes the drawbacks of previous models.
The proposed model is driven by a saliency map of an image and the local image information to enhance the progress of the active contour models.
arXiv Detail & Related papers (2022-05-23T06:02:52Z) - Diffusion Models for Implicit Image Segmentation Ensembles [1.444701913511243]
We present a novel semantic segmentation method based on diffusion models.
By modifying the training and sampling scheme, we show that diffusion models can perform lesion segmentation of medical images.
Compared to state-of-the-art segmentation models, our approach yields good segmentation results and, additionally, meaningful uncertainty maps.
arXiv Detail & Related papers (2021-12-06T16:28:15Z) - SegDiff: Image Segmentation with Diffusion Probabilistic Models [81.16986859755038]
Diffusion Probabilistic Methods are employed for state-of-the-art image generation.
We present a method for extending such models for performing image segmentation.
The method learns end-to-end, without relying on a pre-trained backbone.
arXiv Detail & Related papers (2021-12-01T10:17:25Z) - Flow-Guided Video Inpainting with Scene Templates [57.12499174362993]
We consider the problem of filling in missing spatio-temporal regions of a video.
We introduce a generative model of images in relation to the scene (without missing regions) and mappings from the scene to images.
We use the model to jointly infer the scene template, a 2D representation of the scene, and the mappings.
arXiv Detail & Related papers (2021-08-29T13:49:13Z) - Image Inpainting Using Wasserstein Generative Adversarial Imputation
Network [0.0]
This paper introduces an image inpainting model based on Wasserstein Generative Adversarial Imputation Network.
A universal imputation model is able to handle various scenarios of missingness with sufficient quality.
arXiv Detail & Related papers (2021-06-23T05:55:07Z) - BoundarySqueeze: Image Segmentation as Boundary Squeezing [104.43159799559464]
We propose a novel method for fine-grained high-quality image segmentation of both objects and scenes.
Inspired by dilation and erosion from morphological image processing techniques, we treat the pixel level segmentation problems as squeezing object boundary.
Our method yields large gains on COCO, Cityscapes, for both instance and semantic segmentation and outperforms previous state-of-the-art PointRend in both accuracy and speed under the same setting.
arXiv Detail & Related papers (2021-05-25T04:58:51Z) - Topology-Preserving 3D Image Segmentation Based On Hyperelastic
Regularization [1.52292571922932]
We propose a novel 3D topology-preserving registration-based segmentation model with the hyperelastic regularization.
Numerical experiments have been carried out on the synthetic and real images, which demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-03-31T02:20:46Z) - Monocular Human Pose and Shape Reconstruction using Part Differentiable
Rendering [53.16864661460889]
Recent works succeed in regression-based methods which estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.