Adversarially-Guided Portrait Matting
- URL: http://arxiv.org/abs/2305.02981v2
- Date: Tue, 23 May 2023 11:50:01 GMT
- Title: Adversarially-Guided Portrait Matting
- Authors: Sergej Chicherin, Karen Efremyan
- Abstract summary: We present a method for generating alpha mattes using a limited data source.
We pretrain a novel transformer-based model (StyleMatte) on portrait datasets.
We utilize this model to provide image-mask pairs for the StyleGAN3-based network (StyleMatteGAN).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a method for generating alpha mattes using a limited data source.
We pretrain a novel transformer-based model (StyleMatte) on portrait datasets.
We utilize this model to provide image-mask pairs for the StyleGAN3-based
network (StyleMatteGAN). This network is trained in an unsupervised manner and
generates previously unseen image-mask training pairs that are fed back to
StyleMatte. We demonstrate that the performance of the matting network improves
during this cycle, obtaining top results on human portraits and
state-of-the-art metrics on the animals dataset. Furthermore, StyleMatteGAN provides
high-resolution, privacy-preserving portraits with alpha mattes, making it
suitable for various image composition tasks. Our code is available at
https://github.com/chroneus/stylematte
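
A minimal sketch of the feedback cycle described above, with toy stand-ins for StyleMatte and StyleMatteGAN (the real architectures and losses are not reproduced here):

```python
import torch
import torch.nn as nn

# Toy stand-in for StyleMatte: image -> alpha matte.
matting_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(matting_net.parameters(), lr=1e-4)

def gan_sample(batch_size):
    """Stand-in for StyleMatteGAN: returns synthetic (image, alpha) pairs.
    In the paper this is a StyleGAN3-based generator, seeded with
    image-mask pairs produced by the pretrained matting network."""
    return torch.rand(batch_size, 3, 64, 64), torch.rand(batch_size, 1, 64, 64)

for step in range(3):                          # the feedback cycle
    images, alphas = gan_sample(4)             # previously unseen pairs
    loss = nn.functional.l1_loss(matting_net(images), alphas)
    opt.zero_grad(); loss.backward(); opt.step()
    # Periodically the improved matting net would re-label images to
    # refresh the GAN's image-mask supervision, closing the loop.
```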
Related papers
- Towards Natural Image Matting in the Wild via Real-Scenario Prior [69.96414467916863]
We propose a new matting dataset based on the COCO dataset, namely COCO-Matting.
The resulting COCO-Matting comprises an extensive collection of 38,251 human instance-level alpha mattes in complex natural scenarios.
For network architecture, the proposed feature-aligned transformer learns to extract fine-grained edge and transparency features.
The proposed matte-aligned decoder aims to segment matting-specific objects and convert coarse masks into high-precision mattes.
arXiv Detail & Related papers (2024-10-09T06:43:19Z)
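
A hedged sketch of the coarse-mask-to-matte idea from the COCO-Matting entry above; the layers below are illustrative assumptions, not the paper's feature-aligned transformer or matte-aligned decoder:

```python
import torch
import torch.nn as nn

class MaskToMatte(nn.Module):
    """Refine an image plus a coarse binary mask into a soft alpha matte."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # image (3) + mask (1)
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, coarse_mask):
        return self.refine(torch.cat([image, coarse_mask], dim=1))

matte = MaskToMatte()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(matte.shape)  # torch.Size([1, 1, 64, 64]) -- soft alpha in [0, 1]
```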
- DiffusionMat: Alpha Matting as Sequential Refinement Learning [87.76572845943929]
DiffusionMat is an image matting framework that employs a diffusion model for the transition from coarse to refined alpha mattes.
A correction module adjusts the output at each denoising step, ensuring that the final result is consistent with the input image's structures.
We evaluate our model across several image matting benchmarks, and the results indicate that DiffusionMat consistently outperforms existing methods.
arXiv Detail & Related papers (2023-11-22T17:16:44Z)
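
A minimal sketch of the sequential-refinement idea, with a correction applied after every denoising step; `denoiser` and `correct` are hypothetical stand-ins for DiffusionMat's networks:

```python
import torch
import torch.nn as nn

# Stand-in denoiser: predicts a cleaner alpha from (image, noisy alpha).
denoiser = nn.Conv2d(4, 1, 3, padding=1)

def correct(alpha, image):
    """Stand-in correction module: keep alpha in a valid range. The paper's
    module adjusts each denoising output to stay consistent with the input
    image's structures."""
    return alpha.clamp(0.0, 1.0)

image = torch.rand(1, 3, 64, 64)
alpha = torch.randn(1, 1, 64, 64)          # start from noise (coarse matte)
for t in range(10):                        # denoising trajectory, coarse -> refined
    alpha = denoiser(torch.cat([image, alpha], dim=1))
    alpha = correct(alpha, image)          # correction at every step
```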
- Matte Anything: Interactive Natural Image Matting with Segment Anything Models [35.105593013654]
Matte Anything (MatAny) is an interactive natural image matting model that can produce high-quality alpha mattes.
We leverage vision foundation models to enhance the performance of natural image matting.
MatAny achieves a 58.3% improvement in MSE and a 40.6% improvement in SAD over previous image matting methods.
arXiv Detail & Related papers (2023-06-07T03:31:39Z)
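
One common way to turn a foundation-model segmentation mask (e.g. from SAM) into matting input is an erosion/dilation pseudo-trimap; whether MatAny uses exactly this heuristic is an assumption, so treat the sketch as illustrative:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def mask_to_trimap(mask, band=5):
    """0 = background, 128 = unknown, 255 = foreground."""
    fg = binary_erosion(mask, iterations=band)       # confident foreground
    maybe = binary_dilation(mask, iterations=band)   # foreground or unknown
    trimap = np.full(mask.shape, 128, dtype=np.uint8)
    trimap[fg] = 255
    trimap[~maybe] = 0
    return trimap

mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True                  # pretend this came from SAM
trimap = mask_to_trimap(mask)
# The trimap would then be passed to any trimap-based matting network.
```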
- Self-supervised Matting-specific Portrait Enhancement and Generation [40.444011984347505]
We use StyleGAN to explore the latent space of GAN models.
We optimize multi-scale latent vectors in the latent spaces under four tailored losses.
We show that the proposed method can refine real portrait images for arbitrary matting models.
arXiv Detail & Related papers (2022-08-13T09:00:02Z)
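
A minimal sketch of latent-vector optimization through a frozen generator; the stand-in generator and single loss below simplify the paper's multi-scale latents and four tailored losses:

```python
import torch
import torch.nn as nn

# Frozen stand-in for a StyleGAN generator: latent -> flattened RGB image.
generator = nn.Sequential(nn.Linear(512, 3 * 32 * 32), nn.Tanh())
for p in generator.parameters():
    p.requires_grad_(False)

target = torch.rand(3 * 32 * 32)                       # image to refine toward
latent = torch.randn(512, requires_grad=True)          # optimized, not the weights
opt = torch.optim.Adam([latent], lr=0.01)

for step in range(100):
    image = generator(latent)
    loss = nn.functional.mse_loss(image, target)       # one of several losses in practice
    opt.zero_grad(); loss.backward(); opt.step()
```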
- Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
arXiv Detail & Related papers (2021-10-29T16:51:16Z)
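
A small sketch of the copy-paste pair generation; the rectangular region stands in for the paper's object segments:

```python
import numpy as np

def make_pair(src, dst, box):
    """Paste src[box] into dst at the same location; return (dst', mask)."""
    y0, y1, x0, x1 = box
    out = dst.copy()
    out[y0:y1, x0:x1] = src[y0:y1, x0:x1]
    mask = np.zeros(dst.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True              # ground-truth location of the shared pattern
    return out, mask

src = np.random.rand(64, 64, 3)
dst = np.random.rand(64, 64, 3)
pasted, mask = make_pair(src, dst, (10, 40, 10, 40))
# (src, pasted) now share a pattern at `mask`, usable as a synthetic training pair.
```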
- Deep Automatic Natural Image Matting [82.56853587380168]
Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input such as a trimap.
We propose a novel end-to-end matting network, which can predict a generalized trimap for any image of the above types as a unified semantic representation.
Our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively.
arXiv Detail & Related papers (2021-07-15T10:29:01Z)
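
A toy sketch of predicting a generalized trimap as a three-class map; the backbone is a stand-in, not the paper's network:

```python
import torch
import torch.nn as nn

trimap_head = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),                   # 3 logits per pixel: fg / bg / unknown
)
logits = trimap_head(torch.rand(1, 3, 64, 64))
trimap = logits.argmax(dim=1)              # predicted generalized trimap
# A matting branch would then resolve alpha inside the "unknown" region.
```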
- Alpha Matte Generation from Single Input for Portrait Matting [79.62140902232628]
The goal is to predict an alpha matte that identifies the effect of each pixel on the foreground subject.
Traditional approaches and most existing works rely on an additional input, e.g., a trimap or background image, to predict the alpha matte.
We introduce an approach that performs portrait matting using Generative Adversarial Nets (GANs) without any additional input.
arXiv Detail & Related papers (2021-06-06T18:53:42Z)
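
A minimal sketch of trimap-free matting with a GAN: a generator maps the portrait straight to alpha and a discriminator supplies the adversarial signal. Both modules are toy stand-ins for the paper's architecture:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())      # image -> alpha
D = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Flatten(),
                  nn.Linear(64 * 64, 1))                             # alpha -> realness score

image = torch.rand(2, 3, 64, 64)
fake_alpha = G(image)                      # no trimap or background needed
score = D(fake_alpha)                      # adversarial signal replaces the auxiliary input
```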
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
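
A compact sketch of the shared-encoder, two-decoder layout; the merge rule and layer sizes are illustrative assumptions, not GFM's actual design:

```python
import torch
import torch.nn as nn

class TwoDecoderMatting(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 16, 3, padding=1)      # shared encoder
        self.glance = nn.Conv2d(16, 1, 3, padding=1)       # coarse semantic mask
        self.focus = nn.Conv2d(16, 1, 3, padding=1)        # fine boundary detail

    def forward(self, x):
        feat = torch.relu(self.encoder(x))
        coarse = torch.sigmoid(self.glance(feat))
        detail = torch.sigmoid(self.focus(feat))
        return 0.5 * (coarse + detail)     # simplistic merge of the two views

alpha = TwoDecoderMatting()(torch.rand(1, 3, 64, 64))
```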
- $F$, $B$, Alpha Matting [0.0]
We propose a low-cost modification to alpha matting networks to also predict the foreground and background colours.
Our method achieves state-of-the-art performance on the Adobe Composition-1k dataset for alpha matte and composite colour quality.
arXiv Detail & Related papers (2020-03-17T13:27:51Z)
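
Predicting $F$ and $B$ alongside alpha allows supervision through the compositing equation $I = \alpha F + (1 - \alpha) B$. A small numeric illustration; the composition-loss formulation below is the standard matting loss, assumed rather than quoted from the paper:

```python
import numpy as np

alpha = np.random.rand(64, 64, 1)          # alpha matte
F = np.random.rand(64, 64, 3)              # foreground colours
B = np.random.rand(64, 64, 3)              # background colours
I = alpha * F + (1 - alpha) * B            # observed image as a blend
# A network predicting (F_hat, B_hat, alpha_hat) can be trained with a
# composition loss: mean |alpha_hat * F_hat + (1 - alpha_hat) * B_hat - I|.
```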