PP-Matting: High-Accuracy Natural Image Matting
- URL: http://arxiv.org/abs/2204.09433v1
- Date: Wed, 20 Apr 2022 12:54:06 GMT
- Title: PP-Matting: High-Accuracy Natural Image Matting
- Authors: Guowei Chen, Yi Liu, Jian Wang, Juncai Peng, Yuying Hao, Lutao Chu,
Shiyu Tang, Zewu Wu, Zeyu Chen, Zhiliang Yu, Yuning Du, Qingqing Dang,
Xiaoguang Hu, Dianhai Yu
- Abstract summary: PP-Matting is a trimap-free architecture that can achieve high-accuracy natural image matting.
Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained details of the foreground.
Also, we propose a semantic context branch (SCB) that adopts a semantic segmentation subtask.
- Score: 11.68134059283327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural image matting is a fundamental and challenging computer vision task.
It has many applications in image editing and composition. Recently, deep
learning-based approaches have achieved great improvements in image matting.
However, most of them require a user-supplied trimap as an auxiliary input,
which limits the matting applications in the real world. Although some
trimap-free approaches have been proposed, the matting quality is still
unsatisfactory compared to trimap-based ones. Without the trimap guidance, the
matting models suffer from foreground-background ambiguity easily, and also
generate blurry details in the transition area. In this work, we propose
PP-Matting, a trimap-free architecture that can achieve high-accuracy natural
image matting. Our method applies a high-resolution detail branch (HRDB) that
extracts fine-grained details of the foreground while keeping the feature
resolution unchanged. Also, we propose a semantic context branch (SCB) that
adopts a semantic segmentation subtask, which prevents the detail prediction
from local ambiguity caused by missing semantic context. In addition, we
conduct extensive
experiments on two well-known benchmarks: Composition-1k and Distinctions-646.
The results demonstrate the superiority of PP-Matting over previous methods.
Furthermore, we provide a qualitative evaluation of our method on human
matting, which shows its outstanding performance in practical applications.
The code
and pre-trained models will be available at PaddleSeg:
https://github.com/PaddlePaddle/PaddleSeg.
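The official code is released in PaddleSeg (PaddlePaddle). As a rough illustration of the two-branch design described in the abstract, the sketch below wires a coarse semantic context branch (foreground / background / transition prediction) and a full-resolution detail branch together and fuses them into an alpha matte. It is written in PyTorch for brevity and is an assumption-laden sketch of the general idea, not the authors' implementation; the layer sizes, module names, and exact fusion rule here are invented for illustration.
```python
# Minimal, illustrative PyTorch-style sketch of a trimap-free two-branch
# matting network in the spirit of PP-Matting: a semantic context branch (SCB)
# predicts a coarse 3-class map (background / transition / foreground), and a
# high-resolution detail branch (HRDB) predicts fine alpha values that are
# trusted only inside the predicted transition region. NOT the official
# PaddleSeg implementation; sizes and the fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoBranchMatting(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = conv_bn_relu(3, 32)              # shared shallow features
        # Semantic context branch: works at 1/4 resolution and outputs 3
        # classes (background, transition, foreground), like a segmentation
        # subtask.
        self.scb = nn.Sequential(
            conv_bn_relu(32, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
            nn.Conv2d(128, 3, 1),
        )
        # High-resolution detail branch: keeps full resolution throughout and
        # takes the upsampled semantic probabilities as guidance.
        self.hrdb = nn.Sequential(
            conv_bn_relu(32 + 3, 32),
            conv_bn_relu(32, 32),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image):
        feat = self.stem(image)                      # B x 32 x H x W
        sem_logits = self.scb(feat)                  # B x 3  x H/4 x W/4
        sem_up = F.interpolate(sem_logits, size=feat.shape[2:],
                               mode="bilinear", align_corners=False)
        sem_prob = torch.softmax(sem_up, dim=1)      # bg / transition / fg
        detail = torch.sigmoid(self.hrdb(torch.cat([feat, sem_prob], dim=1)))
        # Fusion: foreground probability plus detail alpha inside the
        # transition region; background contributes zero.
        alpha = sem_prob[:, 2:3] + sem_prob[:, 1:2] * detail
        return alpha.clamp(0, 1), sem_up


if __name__ == "__main__":
    net = TwoBranchMatting()
    alpha, trimap_logits = net(torch.rand(1, 3, 256, 256))
    print(alpha.shape, trimap_logits.shape)          # (1,1,256,256) (1,3,256,256)
```
The point of the sketch is the division of labor: the segmentation-style branch resolves foreground-background ambiguity globally, while the full-resolution branch only has to refine the narrow transition band.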
Related papers
- Deep Image Matting: A Comprehensive Survey [85.77905619102802]
This paper presents a review of recent advancements in image matting in the era of deep learning.
We focus on two fundamental sub-tasks: auxiliary input-based image matting and automatic image matting.
We discuss relevant applications of image matting and highlight existing challenges and potential opportunities for future research.
arXiv Detail & Related papers (2023-04-10T15:48:55Z)
- Disentangled Pre-training for Image Matting [74.10407744483526]
Image matting requires high-quality pixel-level human annotations to support the training of a deep model.
We propose a self-supervised pre-training approach that can leverage virtually unlimited amounts of data to boost matting performance.
arXiv Detail & Related papers (2023-04-03T08:16:02Z)
- Deep Automatic Natural Image Matting [82.56853587380168]
Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input like trimap.
We propose a novel end-to-end matting network, which can predict a generalized trimap for any natural image as a unified semantic representation.
Our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively (composite data of this kind is illustrated in the sketch after this list).
arXiv Detail & Related papers (2021-07-15T10:29:01Z)
- Semantic Image Matting [75.21022252141474]
We show how to obtain better alpha mattes by incorporating semantic classification of matting regions into our framework.
Specifically, we consider and learn 20 classes of matting patterns, and propose to extend the conventional trimap to a semantic trimap.
Experiments on multiple benchmarks show that our method outperforms other approaches and achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-04-16T16:21:02Z)
- Human Perception Modeling for Automatic Natural Image Matting [2.179313476241343]
Natural image matting aims to precisely separate foreground objects from background using alpha matte.
We propose an intuitively-designed trimap-free two-stage matting approach without additional annotations.
Our matting algorithm achieves performance competitive with current state-of-the-art methods in both trimap-free and trimap-based settings.
arXiv Detail & Related papers (2021-03-31T12:08:28Z)
- Salient Image Matting [0.0]
We propose an image matting framework called Salient Image Matting to estimate the per-pixel opacity value of the most salient foreground in an image.
Our framework simultaneously deals with the challenge of learning a wide range of semantics and salient object types.
Our framework requires only a fraction of expensive matting data as compared to other automatic methods.
arXiv Detail & Related papers (2021-03-23T06:22:33Z)
- Towards Enhancing Fine-grained Details for Image Matting [40.17208660790402]
We argue that recovering microscopic details relies on low-level but high-definition texture features.
Our model consists of a conventional encoder-decoder Semantic Path and an independent down-sampling-free Textural Compensate Path.
Our method outperforms previous state-of-the-art methods on the Composition-1k dataset.
arXiv Detail & Related papers (2021-01-22T13:20:23Z)
- Improved Image Matting via Real-time User Clicks and Uncertainty Estimation [87.84632514927098]
This paper proposes an improved deep image matting framework that is trimap-free and needs only a few user clicks to resolve ambiguity.
We introduce a new uncertainty estimation module that predicts which parts need polishing, followed by a local refinement module.
Results show that our method performs better than existing trimap-free methods and comparably to state-of-the-art trimap-based methods with minimal user effort.
arXiv Detail & Related papers (2020-12-15T14:32:36Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
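Several of the related papers above, like PP-Matting's Composition-1k benchmark, rely on composite matting data: a foreground with a known alpha matte is blended onto a new background, and predicted mattes are scored with errors such as SAD and MSE, usually restricted to the unknown (transition) region. The snippet below is a minimal NumPy illustration of the compositing equation I = alpha * F + (1 - alpha) * B and those two error measures; the official datasets and evaluation scripts differ in detail, so treat the scaling and the crude unknown-region mask here as assumptions.
```python
# Illustrative sketch (not the official benchmark code) of how composite
# matting data is built and scored: a foreground is alpha-blended onto a new
# background, and a predicted alpha is compared with the ground truth using
# SAD and MSE over the unknown region.
import numpy as np


def composite(foreground, background, alpha):
    """Blend foreground onto background: I = alpha * F + (1 - alpha) * B."""
    alpha = alpha[..., None]                     # H x W -> H x W x 1
    return alpha * foreground + (1.0 - alpha) * background


def sad(pred_alpha, gt_alpha, mask=None):
    """Sum of absolute differences, conventionally reported divided by 1000."""
    diff = np.abs(pred_alpha - gt_alpha)
    if mask is not None:
        diff = diff * mask
    return diff.sum() / 1000.0


def mse(pred_alpha, gt_alpha, mask=None):
    """Mean squared error over the evaluated region."""
    diff = (pred_alpha - gt_alpha) ** 2
    if mask is not None:
        return diff[mask > 0].mean()
    return diff.mean()


if __name__ == "__main__":
    h, w = 64, 64
    fg = np.random.rand(h, w, 3)
    bg = np.random.rand(h, w, 3)
    gt = np.random.rand(h, w)                    # ground-truth alpha in [0, 1]
    unknown = (gt > 0.05) & (gt < 0.95)          # crude stand-in for the trimap's unknown region
    image = composite(fg, bg, gt)
    pred = np.clip(gt + 0.05 * np.random.randn(h, w), 0, 1)
    print(image.shape, sad(pred, gt, unknown), mse(pred, gt, unknown))
```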
This list is automatically generated from the titles and abstracts of the papers on this site.