Improved Image Matting via Real-time User Clicks and Uncertainty
Estimation
- URL: http://arxiv.org/abs/2012.08323v2
- Date: Sun, 7 Mar 2021 07:14:12 GMT
- Title: Improved Image Matting via Real-time User Clicks and Uncertainty
Estimation
- Authors: Tianyi Wei, Dongdong Chen, Wenbo Zhou, Jing Liao, Hanqing Zhao,
Weiming Zhang, Nenghai Yu
- Abstract summary: This paper proposes an improved deep image matting framework which is trimap-free and only needs several user click interactions to eliminate the ambiguity.
We introduce a new uncertainty estimation module that predicts which parts need polishing, followed by a local refinement module.
Results show that our method performs better than existing trimap-free methods and comparably to state-of-the-art trimap-based methods with minimal user effort.
- Score: 87.84632514927098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image matting is a fundamental and challenging problem in computer vision and
graphics. Most existing matting methods leverage a user-supplied trimap as an
auxiliary input to produce a good alpha matte. However, obtaining a
high-quality trimap is itself arduous, which restricts the application of these
methods. Recently, some trimap-free methods have emerged; however, their
matting quality still lags far behind that of trimap-based methods. The main
reason is that, without trimap guidance, the target network is in some cases
ambiguous about which object is the foreground. In fact, choosing the
foreground is a subjective procedure that depends on the user's intention. To
this end, this paper proposes
an improved deep image matting framework that is trimap-free and needs only a
few user click interactions to eliminate the ambiguity. Moreover, we introduce
a new uncertainty estimation module that predicts which parts need polishing,
followed by a local refinement module. Based on the computation
budget, users can choose how many local parts to improve with the uncertainty
guidance. Quantitative and qualitative results show that our method performs
better than existing trimap-free methods and comparably to state-of-the-art
trimap-based methods with minimal user effort.
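The budget-limited, uncertainty-guided local refinement described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the patch size, the ranking of patches by mean uncertainty, and the function name `select_refinement_patches` are all assumptions made for the sketch.

```python
import numpy as np

def select_refinement_patches(uncertainty, patch=4, budget=3):
    """Rank non-overlapping patches by mean predicted uncertainty and
    return the top-`budget` patch coordinates, i.e. the local parts a
    refinement module would polish first under a computation budget."""
    h, w = uncertainty.shape
    scores = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            # score each patch by its average uncertainty
            scores.append(((y, x), uncertainty[y:y + patch, x:x + patch].mean()))
    scores.sort(key=lambda s: s[1], reverse=True)
    return [coord for coord, _ in scores[:budget]]

# Toy uncertainty map: a horizontal band of ambiguous pixels, standing in
# for the fuzzy boundary region of a predicted alpha matte.
rng = np.random.default_rng(0)
unc = rng.random((16, 16)) * 0.1
unc[6:10, :] += 0.8  # high-uncertainty band

patches = select_refinement_patches(unc, patch=4, budget=3)
print(patches)  # the selected patches all overlap the uncertain band
```

Raising `budget` trades more computation for more refined regions, which mirrors the paper's claim that users can choose how many local parts to improve.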
Related papers
- Learning Trimaps via Clicks for Image Matting [103.6578944248185]
We introduce Click2Trimap, an interactive model capable of predicting high-quality trimaps and alpha mattes with minimal user click inputs.
In the user study, Click2Trimap achieves high-quality trimap and matting predictions in just an average of 5 seconds per image.
arXiv Detail & Related papers (2024-03-30T12:10:34Z)
- Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating manual annotations or any 3D training data requirement.
arXiv Detail & Related papers (2023-09-09T16:21:56Z)
- PP-Matting: High-Accuracy Natural Image Matting [11.68134059283327]
PP-Matting is a trimap-free architecture that can achieve high-accuracy natural image matting.
Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained details of the foreground.
Also, we propose a semantic context branch (SCB) that adopts a semantic segmentation subtask.
arXiv Detail & Related papers (2022-04-20T12:54:06Z)
- Deep Image Matting with Flexible Guidance Input [16.651948566049846]
We propose a matting method that uses Flexible Guidance Input as a user hint.
Our method achieves state-of-the-art results compared with existing trimap-based and trimap-free methods.
arXiv Detail & Related papers (2021-10-21T04:59:27Z)
- Towards Unpaired Depth Enhancement and Super-Resolution in the Wild [121.96527719530305]
State-of-the-art data-driven methods of depth map super-resolution rely on registered pairs of low- and high-resolution depth maps of the same scenes.
We consider an approach to depth map enhancement based on learning from unpaired data.
arXiv Detail & Related papers (2021-05-25T16:19:16Z)
- Semantic Image Matting [75.21022252141474]
We show how to obtain better alpha mattes by incorporating into our framework semantic classification of matting regions.
Specifically, we consider and learn 20 classes of matting patterns, and propose to extend the conventional trimap to semantic trimap.
Experiments on multiple benchmarks show that our method outperforms other methods and achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-04-16T16:21:02Z)
- Human Perception Modeling for Automatic Natural Image Matting [2.179313476241343]
Natural image matting aims to precisely separate foreground objects from the background using an alpha matte.
We propose an intuitively-designed trimap-free two-stage matting approach without additional annotations.
Our matting algorithm has competitive performance with current state-of-the-art methods in both trimap-free and trimap-based settings.
arXiv Detail & Related papers (2021-03-31T12:08:28Z)
- Salient Image Matting [0.0]
We propose an image matting framework called Salient Image Matting to estimate the per-pixel opacity value of the most salient foreground in an image.
Our framework simultaneously deals with the challenge of learning a wide range of semantics and salient object types.
Our framework requires only a fraction of expensive matting data as compared to other automatic methods.
arXiv Detail & Related papers (2021-03-23T06:22:33Z)
- Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.