Deep Automatic Natural Image Matting
- URL: http://arxiv.org/abs/2107.07235v1
- Date: Thu, 15 Jul 2021 10:29:01 GMT
- Title: Deep Automatic Natural Image Matting
- Authors: Jizhizi Li, Jing Zhang, Dacheng Tao
- Abstract summary: Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input like trimap.
We propose a novel end-to-end matting network, which can predict a generalized trimap for any image of the above types as a unified semantic representation.
Our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively.
- Score: 82.56853587380168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic image matting (AIM) refers to estimating the soft foreground from
an arbitrary natural image without any auxiliary input like trimap, which is
useful for image editing. Prior methods try to learn semantic features to aid
the matting process while being limited to images with salient opaque
foregrounds such as humans and animals. In this paper, we investigate the
difficulties when extending them to natural images with salient
transparent/meticulous foregrounds or non-salient foregrounds. To address the
problem, a novel end-to-end matting network is proposed, which can predict a
generalized trimap for any image of the above types as a unified semantic
representation. Simultaneously, the learned semantic features guide the matting
network to focus on the transition areas via an attention mechanism. We also
construct a test set AIM-500 that contains 500 diverse natural images covering
all types along with manually labeled alpha mattes, making it feasible to
benchmark the generalization ability of AIM models. Results of the experiments
demonstrate that our network trained on available composite matting datasets
outperforms existing methods both objectively and subjectively. The source code
and dataset are available at https://github.com/JizhiziLi/AIM.
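To make the two key ideas in the abstract concrete, the sketch below shows, in PyTorch style, a trimap-free network that predicts a 3-class generalized trimap and uses the predicted transition probability as spatial attention for the alpha head. All module names, channel sizes, and the exact fusion rule are illustrative assumptions, not the authors' implementation; the official code is in the linked repository.

```python
# Minimal sketch (not the official AIM implementation): a trimap-free
# matting network that predicts a 3-class generalized trimap and uses the
# predicted transition probability as spatial attention for the matte head.
import torch
import torch.nn as nn


class ToyAIMNet(nn.Module):
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        # Shared encoder (a real model would use a deep backbone, e.g. ResNet).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Semantic head: 3-channel generalized trimap (0=background, 1=transition, 2=foreground).
        self.trimap_head = nn.Conv2d(feat_ch, 3, 1)
        # Matting head: 1-channel alpha matte.
        self.matte_head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, image: torch.Tensor):
        feat = self.encoder(image)
        trimap_logits = self.trimap_head(feat)           # B x 3 x H x W
        trimap_prob = torch.softmax(trimap_logits, dim=1)
        transition_attn = trimap_prob[:, 1:2]            # transition-region probability
        # Attention: emphasise features in (predicted) transition areas.
        alpha_local = torch.sigmoid(self.matte_head(feat * transition_attn))
        # Fuse: trust the semantic foreground prediction outside transitions,
        # and the refined local alpha inside them.
        alpha = trimap_prob[:, 2:3] * (1 - transition_attn) + alpha_local * transition_attn
        return trimap_logits, alpha.clamp(0, 1)


if __name__ == "__main__":
    trimap_logits, alpha = ToyAIMNet()(torch.rand(1, 3, 256, 256))
    print(trimap_logits.shape, alpha.shape)  # [1, 3, 256, 256], [1, 1, 256, 256]
```

The final fusion mirrors the common practice in trimap-free matting of keeping the semantic prediction where the model is confident and refining alpha only in the transition band.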
Related papers
- Boosting General Trimap-free Matting in the Real-World Image [0.0]
We propose a network called Multi-Feature fusion-based Coarse-to-fine Network (MFC-Net).
Our method is significantly effective on both synthetic and real-world images, and its performance on the real-world dataset is far better than that of existing trimap-free methods.
arXiv Detail & Related papers (2024-05-28T07:37:44Z)
- Deep Image Matting: A Comprehensive Survey [85.77905619102802]
This paper presents a review of recent advancements in image matting in the era of deep learning.
We focus on two fundamental sub-tasks: auxiliary input-based image matting and automatic image matting.
We discuss relevant applications of image matting and highlight existing challenges and potential opportunities for future research.
arXiv Detail & Related papers (2023-04-10T15:48:55Z)
- PP-Matting: High-Accuracy Natural Image Matting [11.68134059283327]
PP-Matting is a trimap-free architecture that can achieve high-accuracy natural image matting.
Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained details of the foreground.
Also, we propose a semantic context branch (SCB) that adopts a semantic segmentation subtask.
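Since the summary above pairs a semantic segmentation subtask with a detail branch, a hedged sketch of how such joint supervision could look is given below; the class layout, thresholds, and loss weights are assumptions for illustration, not the official PP-Matting implementation.

```python
# Illustrative joint loss for a two-branch trimap-free matting model
# (semantic context branch + high-resolution detail branch); the class
# layout (0=bg, 1=fg, 2=transition) and the weights are assumptions.
import torch
import torch.nn.functional as F


def matting_loss(seg_logits, pred_alpha, gt_alpha, w_seg=1.0, w_alpha=1.0):
    """seg_logits: B x 3 x H x W; pred_alpha, gt_alpha: B x 1 x H x W in [0, 1]."""
    # Derive a pseudo segmentation target from the ground-truth alpha:
    # pure background, pure foreground, and everything in between (transition).
    seg_target = torch.full_like(gt_alpha[:, 0], 2, dtype=torch.long)  # transition
    seg_target[gt_alpha[:, 0] <= 0.01] = 0                             # background
    seg_target[gt_alpha[:, 0] >= 0.99] = 1                             # foreground
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    alpha_loss = F.l1_loss(pred_alpha, gt_alpha)
    return w_seg * seg_loss + w_alpha * alpha_loss


if __name__ == "__main__":
    loss = matting_loss(torch.randn(2, 3, 64, 64),
                        torch.rand(2, 1, 64, 64),
                        torch.rand(2, 1, 64, 64))
    print(loss.item())
```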
arXiv Detail & Related papers (2022-04-20T12:54:06Z)
- Semantic Image Matting [75.21022252141474]
We show how to obtain better alpha mattes by incorporating into our framework semantic classification of matting regions.
Specifically, we consider and learn 20 classes of matting patterns, and propose to extend the conventional trimap to a semantic trimap.
Experiments on multiple benchmarks show that our method outperforms other methods and achieves state-of-the-art performance.
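The notion of a semantic trimap can be illustrated as a change of input encoding: the usual foreground/background channels plus one channel per matting-pattern class inside the unknown band. The snippet below is a hedged sketch of such an encoding; the 20-class count follows the summary above, but the paper's exact taxonomy and encoding may differ.

```python
# Hedged sketch of a "semantic trimap" encoding: the usual FG/BG channels
# plus one channel per matting-pattern class inside the unknown region.
import numpy as np

NUM_PATTERN_CLASSES = 20  # per the paper summary; the exact taxonomy is assumed here


def to_semantic_trimap(trimap: np.ndarray, pattern_map: np.ndarray) -> np.ndarray:
    """trimap: H x W with values {0: bg, 128: unknown, 255: fg};
    pattern_map: H x W integer class ids in [0, NUM_PATTERN_CLASSES).
    Returns a (2 + NUM_PATTERN_CLASSES) x H x W one-hot-style encoding."""
    h, w = trimap.shape
    enc = np.zeros((2 + NUM_PATTERN_CLASSES, h, w), dtype=np.float32)
    enc[0] = (trimap == 0)            # background channel
    enc[1] = (trimap == 255)          # foreground channel
    unknown = trimap == 128
    for c in range(NUM_PATTERN_CLASSES):
        enc[2 + c] = unknown & (pattern_map == c)   # unknown band, split by pattern class
    return enc


if __name__ == "__main__":
    tri = np.random.choice([0, 128, 255], size=(64, 64)).astype(np.uint8)
    patterns = np.random.randint(0, NUM_PATTERN_CLASSES, size=(64, 64))
    print(to_semantic_trimap(tri, patterns).shape)  # (22, 64, 64)
```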
arXiv Detail & Related papers (2021-04-16T16:21:02Z)
- Smart Scribbles for Image Matting [90.18035889903909]
We propose an interactive framework, referred to as smart scribbles, to guide users to draw a few scribbles on the input images.
It infers the most informative regions of an image for drawing scribbles to indicate different categories.
It then spreads these scribbles to the rest of the image via our well-designed two-phase propagation.
arXiv Detail & Related papers (2021-03-31T13:30:49Z)
- Human Perception Modeling for Automatic Natural Image Matting [2.179313476241343]
Natural image matting aims to precisely separate foreground objects from the background using an alpha matte.
We propose an intuitively-designed trimap-free two-stage matting approach without additional annotations.
Our matting algorithm has competitive performance with current state-of-the-art methods in both trimap-free and trimap-based settings.
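For reference, the alpha matte throughout these works is the per-pixel weight in the standard compositing model, in which every observed pixel is a convex combination of a foreground and a background color:

```latex
% Standard matting compositing equation: the observed color I_p mixes a
% foreground color F_p and a background color B_p with opacity \alpha_p.
I_p = \alpha_p F_p + (1 - \alpha_p) B_p, \qquad \alpha_p \in [0, 1]
```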
arXiv Detail & Related papers (2021-03-31T12:08:28Z)
- Salient Image Matting [0.0]
We propose an image matting framework called Salient Image Matting to estimate the per-pixel opacity value of the most salient foreground in an image.
Our framework simultaneously deals with the challenge of learning a wide range of semantics and salient object types.
Our framework requires only a fraction of expensive matting data as compared to other automatic methods.
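A common way to realize "matting the most salient foreground" is to turn a predicted saliency map into a pseudo-trimap and hand it to a trimap-based matting model; the snippet below sketches only that conversion step, with the threshold and band width as assumed placeholders rather than the paper's actual pipeline.

```python
# Hedged sketch: convert a saliency map into a pseudo-trimap via
# erosion/dilation, so any trimap-based matting model can consume it.
import numpy as np
from scipy import ndimage


def saliency_to_trimap(saliency: np.ndarray, thresh: float = 0.5, band: int = 10) -> np.ndarray:
    """saliency: H x W in [0, 1]. Returns a trimap with values {0, 128, 255}."""
    fg = saliency > thresh
    sure_fg = ndimage.binary_erosion(fg, iterations=band)    # shrink -> confident foreground
    sure_bg = ~ndimage.binary_dilation(fg, iterations=band)  # grow, invert -> confident background
    trimap = np.full(saliency.shape, 128, dtype=np.uint8)    # everything else stays unknown
    trimap[sure_fg] = 255
    trimap[sure_bg] = 0
    return trimap


if __name__ == "__main__":
    sal = np.zeros((128, 128), dtype=np.float32)
    sal[32:96, 32:96] = 1.0                       # toy "salient object"
    print(np.unique(saliency_to_trimap(sal)))     # [  0 128 255]
```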
arXiv Detail & Related papers (2021-03-23T06:22:33Z)
- Self-Adaptively Learning to Demoiré from Focused and Defocused Image Pairs [97.67638106818613]
Moiré artifacts are common in digital photography, resulting from the interference between high-frequency scene content and the color filter array of the camera.
Existing deep learning-based demoiréing methods trained on large-scale datasets are limited in handling various complex moiré patterns.
We propose a self-adaptive learning method for demoiréing a high-frequency image, with the help of an additional defocused moiré-free blur image.
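The self-adaptive idea can be sketched as test-time optimization on a single focused/defocused pair: fit a small network so that the low frequencies of its output match the defocused, moiré-free shot while staying close to the focused input. The objective and blur model below are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of test-time self-supervised adaptation on a single
# focused (moiré-contaminated) / defocused (moiré-free but blurry) pair.
import torch
import torch.nn as nn
import torch.nn.functional as F


def box_blur(x: torch.Tensor, k: int = 9) -> torch.Tensor:
    """Simple low-pass filter standing in for the defocus blur."""
    return F.avg_pool2d(x, kernel_size=k, stride=1, padding=k // 2)


net = nn.Sequential(                        # tiny per-image demoiréing network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(16, 3, 3, padding=1),
)
focused = torch.rand(1, 3, 128, 128)        # sharp but with moiré
defocused = box_blur(focused)               # blurry but moiré-free (toy stand-in)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(50):                      # adapt on this single pair only
    out = net(focused)
    loss = F.l1_loss(box_blur(out), defocused)      # low-frequency consistency
    loss = loss + 0.1 * F.l1_loss(out, focused)     # stay close to the sharp input
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```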
arXiv Detail & Related papers (2020-11-03T23:09:02Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.