Learning Trimaps via Clicks for Image Matting
- URL: http://arxiv.org/abs/2404.00335v2
- Date: Sat, 6 Apr 2024 15:25:51 GMT
- Title: Learning Trimaps via Clicks for Image Matting
- Authors: Chenyi Zhang, Yihan Hu, Henghui Ding, Humphrey Shi, Yao Zhao, Yunchao Wei
- Abstract summary: We introduce Click2Trimap, an interactive model capable of predicting high-quality trimaps and alpha mattes with minimal user click inputs.
In the user study, Click2Trimap achieves high-quality trimap and matting predictions in an average of just 5 seconds per image.
- Score: 103.6578944248185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant advancements in image matting, existing models heavily depend on manually drawn trimaps for accurate results in natural image scenarios. However, obtaining trimaps is time-consuming and lacks user-friendliness and device compatibility, which greatly limits the practical application of all trimap-based matting methods. To address this issue, we introduce Click2Trimap, an interactive model capable of predicting high-quality trimaps and alpha mattes with minimal user click input. By analyzing real users' behavioral logic and the characteristics of trimaps, we propose a powerful iterative three-class training strategy and a dedicated simulation function that make Click2Trimap versatile across various scenarios. Quantitative and qualitative assessments on synthetic and real-world matting datasets demonstrate Click2Trimap's superior performance compared to all existing trimap-free matting methods. Notably, in the user study, Click2Trimap achieves high-quality trimap and matting predictions in an average of just 5 seconds per image, demonstrating its substantial practical value in real-world applications.
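The abstract describes an interactive, iterative workflow: the user clicks, the model updates a three-class trimap (foreground / unknown / background), and an alpha matte is produced from that trimap. The sketch below illustrates only the shape of such a loop; it is not the authors' code, and the function names and trivial placeholder predictors are hypothetical.

```python
"""Minimal sketch of an interactive click -> trimap -> alpha-matte loop.

Not the authors' implementation: Click2Trimap's network and training
strategy are not reproduced here.  All names and the toy placeholder
predictors are hypothetical, for illustration only.
"""
import numpy as np

# A click is (row, col, label) with labels 0 = background, 1 = unknown, 2 = foreground.
Click = tuple[int, int, int]


def predict_trimap(image: np.ndarray, clicks: list[Click]) -> np.ndarray:
    """Placeholder for a click-conditioned trimap network.

    A real model would encode the image together with click maps and output
    per-pixel probabilities over the three trimap classes.  Here we simply
    stamp a small disk of the clicked label around each click so the loop
    below runs end to end.
    """
    h, w = image.shape[:2]
    trimap = np.zeros((h, w), dtype=np.uint8)            # start as all background
    rr, cc = np.ogrid[:h, :w]
    for r, c, label in clicks:
        mask = (rr - r) ** 2 + (cc - c) ** 2 < 40 ** 2   # crude disk around the click
        trimap[mask] = label
    return trimap


def predict_alpha(image: np.ndarray, trimap: np.ndarray) -> np.ndarray:
    """Placeholder for any off-the-shelf trimap-based matting model."""
    alpha = (trimap == 2).astype(np.float32)             # foreground -> 1
    alpha[trimap == 1] = 0.5                             # unknown -> soft value
    return alpha


def interactive_session(image: np.ndarray, user_clicks: list[Click]) -> np.ndarray:
    """Feed clicks one at a time, refining the trimap and matte after each click."""
    clicks: list[Click] = []
    alpha = np.zeros(image.shape[:2], dtype=np.float32)
    for click in user_clicks:
        clicks.append(click)
        trimap = predict_trimap(image, clicks)           # update the trimap
        alpha = predict_alpha(image, trimap)             # re-run matting on it
    return alpha


if __name__ == "__main__":
    img = np.zeros((256, 256, 3), dtype=np.uint8)
    # e.g. one foreground click and one unknown-region click
    matte = interactive_session(img, [(128, 128, 2), (128, 170, 1)])
    print(matte.shape, float(matte.min()), float(matte.max()))
```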
Related papers
- Neural Semantic Surface Maps [52.61017226479506]
We present an automated technique for computing a map between two genus-zero shapes, which matches semantically corresponding regions to one another.
Our approach can generate semantic surface-to-surface maps, eliminating the need for manual annotations or any 3D training data.
arXiv Detail & Related papers (2023-09-09T16:21:56Z)
- Matte Anything: Interactive Natural Image Matting with Segment Anything Models [35.105593013654]
Matte Anything (MatAny) is an interactive natural image matting model that can produce high-quality alpha mattes.
We leverage vision foundation models to enhance the performance of natural image matting.
MatAny achieves a 58.3% improvement in MSE and a 40.6% improvement in SAD compared with previous image matting methods.
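MSE and SAD are the two standard matting error metrics behind these numbers. The short sketch below shows how they are conventionally computed between a predicted and a ground-truth alpha matte; the unknown-region restriction and the SAD-in-thousands scaling are common benchmark conventions assumed here, not necessarily MatAny's exact protocol.

```python
from __future__ import annotations

import numpy as np


def matting_errors(alpha_pred: np.ndarray, alpha_gt: np.ndarray,
                   trimap: np.ndarray | None = None) -> tuple[float, float]:
    """SAD and MSE between predicted and ground-truth alpha mattes in [0, 1].

    If a trimap is given, errors are accumulated only over its unknown
    region (value 128 in the usual 0/128/255 encoding), which is the common
    evaluation protocol on Composition-1k-style benchmarks.
    """
    diff = alpha_pred.astype(np.float64) - alpha_gt.astype(np.float64)
    if trimap is not None:
        diff = diff[trimap == 128]
    sad = float(np.abs(diff).sum() / 1000.0)   # SAD is usually reported in thousands
    mse = float(np.mean(diff ** 2))
    return sad, mse
```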
arXiv Detail & Related papers (2023-06-07T03:31:39Z)
- Adaptive Human Matting for Dynamic Videos [62.026375402656754]
Adaptive Matting for Dynamic Videos, termed AdaM, is a framework for differentiating foregrounds from backgrounds in dynamic video.
Two interconnected network designs are employed to achieve this goal.
We benchmark and study our methods on recently introduced datasets, showing that our matting achieves new best-in-class generalizability.
arXiv Detail & Related papers (2023-04-12T17:55:59Z)
- Deep Image Matting with Flexible Guidance Input [16.651948566049846]
We propose a matting method that uses Flexible Guidance Input as the user hint.
Our method achieves state-of-the-art results compared with existing trimap-based and trimap-free methods.
arXiv Detail & Related papers (2021-10-21T04:59:27Z)
- Semantic Image Matting [75.21022252141474]
We show how to obtain better alpha mattes by incorporating semantic classification of matting regions into our framework.
Specifically, we consider and learn 20 classes of matting patterns, and propose to extend the conventional trimap to semantic trimap.
Experiments on multiple benchmarks show that our method outperforms other methods and achieves state-of-the-art performance.
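One way to picture the semantic trimap is as a conventional trimap whose single "unknown" label is refined into one of the learned matting-pattern classes. The toy conversion below illustrates only that data structure; the integer encoding is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

BG, UNKNOWN, FG = 0, 128, 255   # conventional trimap values
NUM_PATTERNS = 20               # number of matting-pattern classes in the paper


def to_semantic_trimap(trimap: np.ndarray, pattern_ids: np.ndarray) -> np.ndarray:
    """Convert (trimap, per-pixel pattern class) into one integer label map.

    Assumed encoding (illustrative only):
      0                 -> background
      1 .. NUM_PATTERNS -> unknown region, labelled by its matting pattern
      NUM_PATTERNS + 1  -> foreground
    """
    semantic = np.zeros_like(trimap, dtype=np.int32)
    unknown = trimap == UNKNOWN
    semantic[unknown] = 1 + pattern_ids[unknown]   # pattern ids assumed in [0, 19]
    semantic[trimap == FG] = NUM_PATTERNS + 1
    return semantic
```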
arXiv Detail & Related papers (2021-04-16T16:21:02Z)
- Human Perception Modeling for Automatic Natural Image Matting [2.179313476241343]
Natural image matting aims to precisely separate foreground objects from the background using an alpha matte.
We propose an intuitively-designed trimap-free two-stage matting approach without additional annotations.
Our matting algorithm is competitive with current state-of-the-art methods in both trimap-free and trimap-based settings.
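The alpha matte referred to throughout these summaries is the per-pixel opacity in the standard compositing model I = alpha * F + (1 - alpha) * B; matting is the inverse problem of recovering alpha from the composite image. A minimal illustration of the forward model:

```python
import numpy as np


def composite(foreground: np.ndarray, background: np.ndarray,
              alpha: np.ndarray) -> np.ndarray:
    """Standard compositing: I = alpha * F + (1 - alpha) * B.

    `alpha` holds per-pixel opacity in [0, 1]; a trailing channel axis is
    added if the images are RGB and alpha is a single-channel map.
    """
    if alpha.ndim == foreground.ndim - 1:
        alpha = alpha[..., None]
    return alpha * foreground + (1.0 - alpha) * background
```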
arXiv Detail & Related papers (2021-03-31T12:08:28Z)
- Salient Image Matting [0.0]
We propose an image matting framework called Salient Image Matting to estimate the per-pixel opacity value of the most salient foreground in an image.
Our framework simultaneously deals with the challenge of learning a wide range of semantics and salient object types.
Our framework requires only a fraction of the expensive matting data needed by other automatic methods.
arXiv Detail & Related papers (2021-03-23T06:22:33Z)
- Improved Image Matting via Real-time User Clicks and Uncertainty Estimation [87.84632514927098]
This paper proposes an improved deep image matting framework which is trimap-free and only needs several user click interactions to eliminate the ambiguity.
We introduce a new uncertainty estimation module that predicts which regions need polishing, followed by a local refinement module.
Results show that our method performs better than existing trimap-free methods and comparably to state-of-the-art trimap-based methods with minimal user effort.
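A rough sketch of the uncertainty-guided refinement idea described above: re-run a local refinement pass only on the patches the model flags as unreliable. The patch-mean threshold and the stand-in `refine_fn` are assumptions for illustration, not the paper's modules.

```python
import numpy as np


def refine_uncertain_regions(alpha: np.ndarray, uncertainty: np.ndarray,
                             refine_fn, patch: int = 64,
                             thresh: float = 0.5) -> np.ndarray:
    """Apply `refine_fn` only to patches whose mean uncertainty exceeds `thresh`.

    `refine_fn(patch_alpha) -> patch_alpha` stands in for a learned local
    refinement module; here it is just a callable supplied by the caller.
    """
    out = alpha.copy()
    h, w = alpha.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if uncertainty[y:y + patch, x:x + patch].mean() > thresh:
                out[y:y + patch, x:x + patch] = refine_fn(out[y:y + patch, x:x + patch])
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alpha = rng.random((128, 128)).astype(np.float32)
    unc = rng.random((128, 128)).astype(np.float32)
    # Trivial "refinement": replace each flagged patch by its mean (illustration only).
    refined = refine_uncertain_regions(alpha, unc, lambda p: np.full_like(p, p.mean()))
    print(refined.shape)
```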
arXiv Detail & Related papers (2020-12-15T14:32:36Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder that jointly generates realistic images from synthetic 3D models and decomposes real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)