Salient Image Matting
- URL: http://arxiv.org/abs/2103.12337v1
- Date: Tue, 23 Mar 2021 06:22:33 GMT
- Title: Salient Image Matting
- Authors: Rahul Deora, Rishab Sharma and Dinesh Samuel Sathia Raj
- Abstract summary: We propose an image matting framework called Salient Image Matting to estimate the per-pixel opacity value of the most salient foreground in an image.
Our framework simultaneously deals with the challenge of learning a wide range of semantics and salient object types.
Our framework requires only a fraction of expensive matting data as compared to other automatic methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an image matting framework called Salient Image
Matting (SIM) to estimate the per-pixel opacity value of the most salient foreground
in an image. To deal with a large amount of semantic diversity in images, a
trimap is conventionally required as it provides important guidance about
object semantics to the matting process. However, creating a good trimap is
often expensive and time-consuming. The SIM framework simultaneously deals with
the challenge of learning a wide range of semantics and salient object types in
a fully automatic, end-to-end manner. Specifically, our framework is able
to produce accurate alpha mattes directly from an RGB input for a wide range of
foreground objects, including cases where the foreground class, such as humans,
appears in a very different context from the training data. This is done by
employing a salient object detection model to produce a trimap of the most
salient object in the image, which informs the matting model of
higher-level object semantics. Our framework leverages large amounts of coarse
annotations coupled with a heuristic trimap generation scheme to train the
trimap prediction network so it can produce trimaps for arbitrary foregrounds.
Moreover, we introduce a multi-scale fusion architecture for the task of
matting to better capture finer, low-level opacity semantics. With high-level
guidance provided by the trimap network, our framework requires only a fraction
of expensive matting data as compared to other automatic methods while being
able to produce alpha mattes for a diverse range of inputs. We demonstrate our
framework on a range of diverse images and experimental results show our
framework compares favourably against state-of-the-art matting methods without
the need for a trimap.
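The heuristic trimap generation scheme is only described at a high level above. Below is a minimal sketch of one common way to derive a trimap from a coarse binary saliency or segmentation mask, assuming the standard erosion/dilation heuristic; the function name, kernel size, and band width are illustrative and not taken from the paper.

```python
import cv2
import numpy as np

def trimap_from_mask(mask: np.ndarray, band: int = 10) -> np.ndarray:
    """Turn a coarse binary foreground mask into a trimap.

    The mask is eroded to obtain confident foreground and dilated to bound
    the background; the ring in between is marked unknown. Values follow
    the usual convention: 255 = foreground, 128 = unknown, 0 = background.
    """
    binary = (mask > 127).astype(np.uint8)           # binarise a 0-255 mask
    kernel = np.ones((3, 3), np.uint8)
    sure_fg = cv2.erode(binary, kernel, iterations=band)        # confident foreground
    fg_or_unknown = cv2.dilate(binary, kernel, iterations=band) # foreground + unknown

    trimap = np.full(binary.shape, 128, dtype=np.uint8)  # default: unknown band
    trimap[sure_fg == 1] = 255                            # confident foreground
    trimap[fg_or_unknown == 0] = 0                        # confident background
    return trimap

# Hypothetical end-to-end wiring of the pipeline described in the abstract;
# `saliency_model` and `matting_model` are placeholders for the trimap
# prediction network and the matting network, which are not part of this page.
# saliency = saliency_model(rgb_image)                       # saliency in [0, 1]
# trimap = trimap_from_mask((saliency * 255).astype(np.uint8))
# alpha = matting_model(rgb_image, trimap)                    # per-pixel opacity
```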
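The multi-scale fusion architecture for matting is likewise not detailed in the abstract. As a rough illustration only, a generic fusion head might upsample encoder features from several scales to the finest resolution, concatenate them, and predict the alpha matte; the module name, channel counts, and layer choices below are assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Fuse feature maps from several encoder scales by upsampling them to
    the finest resolution, concatenating, and mixing with 3x3 convolutions.
    This is a generic multi-scale fusion sketch, not the paper's architecture."""

    def __init__(self, in_channels=(64, 128, 256), hidden_channels=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(sum(in_channels), hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_channels, 1, kernel_size=3, padding=1),  # alpha logits
        )

    def forward(self, features):
        # features: list of tensors [B, C_i, H_i, W_i], finest scale first
        target_size = features[0].shape[-2:]
        upsampled = [
            F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
            for f in features
        ]
        alpha_logits = self.fuse(torch.cat(upsampled, dim=1))
        return torch.sigmoid(alpha_logits)  # per-pixel opacity in [0, 1]
```

For example, calling this module on feature maps of shapes (1, 64, 128, 128), (1, 128, 64, 64) and (1, 256, 32, 32) yields a (1, 1, 128, 128) alpha prediction at the finest feature resolution.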
Related papers
- Learning Trimaps via Clicks for Image Matting [103.6578944248185]
We introduce Click2Trimap, an interactive model capable of predicting high-quality trimaps and alpha mattes with minimal user click inputs.
In the user study, Click2Trimap achieves high-quality trimap and matting predictions in just an average of 5 seconds per image.
arXiv Detail & Related papers (2024-03-30T12:10:34Z)
- TransMatting: Enhancing Transparent Objects Matting with Transformers [4.012340049240327]
We propose a Transformer-based network, TransMatting, to model transparent objects with a big receptive field.
A small convolutional network is proposed to utilize the global feature and non-background mask to guide the multi-scale feature propagation from encoder to decoder.
We create a high-resolution matting dataset of transparent objects with small known foreground areas.
arXiv Detail & Related papers (2022-08-05T06:44:14Z)
- PP-Matting: High-Accuracy Natural Image Matting [11.68134059283327]
PP-Matting is a trimap-free architecture that can achieve high-accuracy natural image matting.
Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained details of the foreground.
Also, we propose a semantic context branch (SCB) that adopts a semantic segmentation subtask.
arXiv Detail & Related papers (2022-04-20T12:54:06Z)
- Deep Automatic Natural Image Matting [82.56853587380168]
Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input like trimap.
We propose a novel end-to-end matting network, which can predict a generalized trimap for any image of the above types as a unified semantic representation.
Our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively.
arXiv Detail & Related papers (2021-07-15T10:29:01Z)
- Semantic Image Matting [75.21022252141474]
We show how to obtain better alpha mattes by incorporating into our framework semantic classification of matting regions.
Specifically, we consider and learn 20 classes of matting patterns, and propose to extend the conventional trimap to semantic trimap.
Experiments on multiple benchmarks show that our method outperforms other methods and achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-04-16T16:21:02Z)
- Smart Scribbles for Image Matting [90.18035889903909]
We propose an interactive framework, referred to as smart scribbles, to guide users to draw a few scribbles on the input images.
It infers the most informative regions of an image for drawing scribbles to indicate different categories.
It then spreads these scribbles to the rest of the image via our well-designed two-phase propagation.
arXiv Detail & Related papers (2021-03-31T13:30:49Z)
- Human Perception Modeling for Automatic Natural Image Matting [2.179313476241343]
Natural image matting aims to precisely separate foreground objects from background using alpha matte.
We propose an intuitively-designed trimap-free two-stage matting approach without additional annotations.
Our matting algorithm is competitive with current state-of-the-art methods in both trimap-free and trimap-based settings.
arXiv Detail & Related papers (2021-03-31T12:08:28Z)
- Improved Image Matting via Real-time User Clicks and Uncertainty Estimation [87.84632514927098]
This paper proposes an improved deep image matting framework which is trimap-free and only needs several user click interactions to eliminate the ambiguity.
We introduce a new uncertainty estimation module that can predict which parts need polishing and a following local refinement module.
Results show that our method performs better than existing trimap-free methods and comparably to state-of-the-art trimap-based methods with minimal user effort.
arXiv Detail & Related papers (2020-12-15T14:32:36Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.