Smart Scribbles for Image Matting
- URL: http://arxiv.org/abs/2103.17062v1
- Date: Wed, 31 Mar 2021 13:30:49 GMT
- Title: Smart Scribbles for Image Matting
- Authors: Xin Yang, Yu Qiao, Shaozhe Chen, Shengfeng He, Baocai Yin, Qiang
Zhang, Xiaopeng Wei, Rynson W.H. Lau
- Abstract summary: We propose an interactive framework, referred to as smart scribbles, to guide users to draw a few scribbles on the input images.
It infers the most informative regions of an image for drawing scribbles to indicate different categories.
It then spreads these scribbles to the rest of the image via our well-designed two-phase propagation.
- Score: 90.18035889903909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image matting is an ill-posed problem that usually requires additional user
input, such as trimaps or scribbles. Drawing a fine trimap requires a large
amount of user effort, while non-professional users can hardly obtain
satisfactory alpha mattes from scribbles alone. Some recent deep learning-based
matting networks rely on large-scale composite datasets for training to improve
performance, occasionally producing obvious artifacts when
processing natural images. In this article, we explore the intrinsic
relationship between user input and alpha mattes and strike a balance between
user effort and the quality of alpha mattes. In particular, we propose an
interactive framework, referred to as smart scribbles, to guide users to draw
a few scribbles on the input images to produce high-quality alpha mattes. It first
infers the most informative regions of an image for drawing scribbles to
indicate different categories (foreground, background, or unknown) and then
spreads these scribbles (i.e., the category labels) to the rest of the image
via our well-designed two-phase propagation. Both neighboring low-level
affinities and high-level semantic features are considered during the propagation
process. Our method can be optimized without large-scale matting datasets and
exhibits greater generality in real situations. Extensive experiments
demonstrate that smart scribbles can produce more accurate alpha mattes with
reduced additional input, compared with state-of-the-art matting methods.
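
For intuition, matting is ill-posed because every pixel of the observed image I must be explained by the compositing equation I = alpha*F + (1 - alpha)*B, which has more unknowns (alpha, foreground F, background B) than observations; scribbles supply the missing constraints. The sketch below illustrates the two ideas from the abstract: ranking candidate regions by label uncertainty as a stand-in for "most informative", and spreading scribble labels over an affinity graph. It is a minimal illustration under our own assumptions, not the paper's implementation: the entropy criterion, Gaussian affinity, function names, and all parameter values are placeholders.

import numpy as np

def rank_informative_regions(region_probs, top_k=3):
    # Rank candidate regions by label entropy -- a plausible proxy for
    # "most informative to scribble on", not the paper's actual criterion.
    p = np.clip(region_probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(-entropy)[:top_k]

def propagate_scribbles(features, labels, n_iters=200, sigma=0.1, alpha=0.9):
    # features: (N, D) per-pixel descriptors; labels: (N, C) one-hot rows for
    # scribbled pixels (C = foreground / background / unknown), zeros elsewhere.
    # Dense Gaussian affinity for clarity; a real system would use a sparse
    # k-NN graph over pixels or superpixels.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    S = W / W.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

    F = labels.astype(float).copy()
    scribbled = labels.sum(axis=1) > 0
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1.0 - alpha) * labels  # diffuse, anchored to scribbles
        F[scribbled] = labels[scribbled]              # user scribbles stay fixed
    return F.argmax(axis=1)  # per-pixel category index

In the paper's terms, the first propagation phase might run this with low-level features (e.g., color and position) and the second with high-level semantic features; calling propagate_scribbles twice with the two feature sets approximates the two-phase split the abstract describes.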
Related papers
- Deep Image Matting: A Comprehensive Survey [85.77905619102802]
This paper presents a review of recent advancements in image matting in the era of deep learning.
We focus on two fundamental sub-tasks: auxiliary input-based image matting and automatic image matting.
We discuss relevant applications of image matting and highlight existing challenges and potential opportunities for future research.
arXiv Detail & Related papers (2023-04-10T15:48:55Z)
- Hierarchical and Progressive Image Matting [40.291998690687514]
We propose an end-to-end Hierarchical and Progressive Attention Matting Network (HAttMatting++)
It can better predict the opacity of the foreground from single RGB images without additional input.
We construct a large-scale and challenging image matting dataset comprising 59,600 training images and 1,000 test images.
arXiv Detail & Related papers (2022-10-13T11:16:49Z)
- Deep Automatic Natural Image Matting [82.56853587380168]
Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input such as a trimap.
We propose a novel end-to-end matting network, which can predict a generalized trimap for any image of the above types as a unified semantic representation.
Our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively.
arXiv Detail & Related papers (2021-07-15T10:29:01Z)
- Semantic Image Matting [75.21022252141474]
We show how to obtain better alpha mattes by incorporating semantic classification of matting regions into our framework.
Specifically, we consider and learn 20 classes of matting patterns, and propose to extend the conventional trimap to semantic trimap.
Experiments on multiple benchmarks show that our method outperforms other methods and achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-04-16T16:21:02Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
- Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to learn sufficiently powerful representations under such complex conditions.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z)
- High-Resolution Deep Image Matting [39.72708676319803]
HDMatt is the first deep learning-based image matting approach for high-resolution inputs.
Our proposed method sets new state-of-the-art performance on Adobe Image Matting and AlphaMatting benchmarks.
arXiv Detail & Related papers (2020-09-14T17:53:15Z)
- Natural Image Matting via Guided Contextual Attention [18.034160025888056]
We develop a novel end-to-end approach for natural image matting with a guided contextual attention module.
The proposed method mimics the information flow of affinity-based methods while utilizing rich features learned by deep neural networks.
Experiment results on Composition-1k testing set and alphamatting.com benchmark dataset demonstrate that our method outperforms state-of-the-art approaches in natural image matting.
arXiv Detail & Related papers (2020-01-13T05:59:17Z)