End-to-End Human Instance Matting
- URL: http://arxiv.org/abs/2403.01510v1
- Date: Sun, 3 Mar 2024 13:17:10 GMT
- Title: End-to-End Human Instance Matting
- Authors: Qinglin Liu, Shengping Zhang, Quanling Meng, Bineng Zhong, Peiqiang
Liu, Hongxun Yao
- Abstract summary: Human instance matting aims to estimate an alpha matte for each human instance in an image.
This paper proposes a novel End-to-End Human Instance Matting (E2E-HIM) framework for simultaneous multiple instance matting.
E2E-HIM outperforms existing methods on human instance matting with 50% lower errors and 5x faster speed.
- Score: 27.96723058460764
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human instance matting aims to estimate an alpha matte for each human
instance in an image, which is extremely challenging and has rarely been
studied so far. Although some efforts use instance segmentation to generate a
trimap for each instance and then apply trimap-based matting methods, the
resulting alpha mattes are often inaccurate owing to segmentation errors. In
addition, this approach is computationally inefficient because the matting
method must be executed once per instance. To address these problems, this paper proposes a novel
End-to-End Human Instance Matting (E2E-HIM) framework for simultaneous multiple
instance matting in a more efficient manner. Specifically, a general perception
network first extracts image features and decodes instance contexts into latent
codes. Then, a united guidance network exploits spatial attention and semantics
embedding to generate united semantics guidance, which encodes the locations
and semantic correspondences of all instances. Finally, an instance matting
network decodes the image features and united semantics guidance to predict all
instance-level alpha mattes. In addition, we construct a large-scale human
instance matting dataset (HIM-100K) comprising over 100,000 human images with
instance alpha matte labels. Experiments on HIM-100K demonstrate that the
proposed E2E-HIM outperforms existing human instance matting methods, achieving
50% lower errors at 5x faster speed (6 instances in a 640x640 image). Experiments
on the PPM-100, RWP-636, and P3M datasets demonstrate that E2E-HIM also
achieves competitive performance on traditional human matting.
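To make the three-stage pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of that data flow: a perception network producing image features and per-instance latent codes, a guidance network fusing the two with spatial attention, and a matting network decoding all alpha mattes in one pass. All module names, layer choices, and tensor shapes here are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class GeneralPerceptionNetwork(nn.Module):
    """Extracts image features and decodes instance contexts into latent codes."""
    def __init__(self, feat_dim=256, num_slots=10):
        super().__init__()
        # Stand-in backbone; the paper would use a real feature extractor.
        self.backbone = nn.Sequential(nn.Conv2d(3, feat_dim, 3, stride=4, padding=1), nn.ReLU())
        self.queries = nn.Embedding(num_slots, feat_dim)    # learnable instance slots
        layer = nn.TransformerDecoderLayer(feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)

    def forward(self, image):
        feats = self.backbone(image)                        # (B, C, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)           # (B, HW, C)
        q = self.queries.weight.unsqueeze(0).expand(image.size(0), -1, -1)
        return feats, self.decoder(q, tokens)               # latent codes: (B, N, C)

class UnitedGuidanceNetwork(nn.Module):
    """Fuses latent codes with image features via spatial attention into a
    united guidance map encoding the locations of all instances."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)

    def forward(self, feats, latents):
        B, C, H, W = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)           # (B, HW, C)
        guidance, _ = self.attn(tokens, latents, latents)   # pixels attend to instances
        return guidance.transpose(1, 2).reshape(B, C, H, W)

class InstanceMattingNetwork(nn.Module):
    """Decodes image features plus guidance into all instance alpha mattes at once."""
    def __init__(self, feat_dim=256, num_slots=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, num_slots, 1))              # one alpha channel per slot

    def forward(self, feats, guidance):
        return torch.sigmoid(self.head(torch.cat([feats, guidance], dim=1)))

# One forward pass predicts mattes for every instance slot simultaneously.
image = torch.randn(1, 3, 128, 128)
perception, guide, matting = GeneralPerceptionNetwork(), UnitedGuidanceNetwork(), InstanceMattingNetwork()
feats, latents = perception(image)
alphas = matting(feats, guide(feats, latents))              # (1, 10, 32, 32)

A single forward pass yields one alpha channel per instance slot, which is what removes the per-instance re-execution cost of the trimap-based pipelines criticized above.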
Related papers
- MaGGIe: Masked Guided Gradual Human Instance Matting [71.22209465934651]
We propose a new framework MaGGIe, Masked Guided Gradual Human Instance Matting.
It predicts alpha mattes progressively for each human instance while balancing computational cost, precision, and consistency.
arXiv Detail & Related papers (2024-04-24T17:59:53Z)
- Towards Label-Efficient Human Matting: A Simple Baseline for Weakly Semi-Supervised Trimap-Free Human Matting [50.99997483069828]
We introduce a new learning paradigm, weakly semi-supervised human matting (WSSHM).
WSSHM uses a small amount of expensive matte labels and a large amount of budget-friendly segmentation labels to save annotation cost and resolve the domain generalization problem.
Our training method is also easily applicable to real-time models, achieving competitive accuracy at very high inference speed (a sketch of this mixed supervision follows this entry).
arXiv Detail & Related papers (2024-04-01T04:53:06Z)
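The following is a minimal sketch, under assumptions, of how such mixed supervision could look: precise matte labels receive a regression loss, while cheap segmentation labels only supervise a binarized foreground. The loss forms, the seg_weight parameter, and the wsshm_loss helper are hypothetical, not the paper's method.

import torch
import torch.nn.functional as F

def wsshm_loss(pred_alpha, target, label_type, seg_weight=0.5):
    """pred_alpha, target: (B, 1, H, W); label_type: 'matte' or 'seg'.
    Hypothetical helper illustrating mixed matte/segmentation supervision."""
    if label_type == "matte":
        return F.l1_loss(pred_alpha, target)  # fine-grained alpha supervision
    # Segmentation labels are binary masks: supervise coarse foreground only.
    return seg_weight * F.binary_cross_entropy(pred_alpha, target)

# Usage: batches from the two label pools are mixed during training.
pred = torch.rand(2, 1, 64, 64).clamp(0.01, 0.99).requires_grad_()
matte_gt = torch.rand(2, 1, 64, 64)
seg_gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = wsshm_loss(pred, matte_gt, "matte") + wsshm_loss(pred, seg_gt, "seg")
loss.backward()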
- SGM-Net: Semantic Guided Matting Net [5.126872642595207]
We propose a module that generates a foreground probability map and add it to MODNet to obtain the Semantic Guided Matting Net (SGM-Net).
Given only a single image, the network can perform the human matting task.
arXiv Detail & Related papers (2022-08-16T01:58:25Z)
- UniInst: Unique Representation for End-to-End Instance Segmentation [29.974973664317485]
We propose a box-free and NMS-free end-to-end instance segmentation framework, termed UniInst.
Specifically, we design an instance-aware one-to-one assignment scheme, which dynamically assigns one unique representation to each instance (a sketch of such an assignment follows this entry).
With these techniques, our UniInst, the first FCN-based end-to-end instance segmentation framework, achieves competitive performance.
arXiv Detail & Related papers (2022-05-25T10:40:26Z)
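The following is a generic sketch of a one-to-one assignment in the spirit described above: ground-truth instances are matched to predictions by solving a bipartite matching over a cost matrix. The cost terms and the one_to_one_assign helper are illustrative assumptions; UniInst's actual ranking-based scheme differs in detail.

import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_assign(pred_scores, pred_masks, gt_masks):
    """pred_scores: (P,), pred_masks: (P, H, W), gt_masks: (G, H, W) binary.
    Hypothetical helper: matches each ground truth to one unique prediction."""
    P, G = pred_masks.shape[0], gt_masks.shape[0]
    cost = np.zeros((P, G))
    for i in range(P):
        for j in range(G):
            inter = np.logical_and(pred_masks[i] > 0.5, gt_masks[j] > 0.5).sum()
            union = np.logical_or(pred_masks[i] > 0.5, gt_masks[j] > 0.5).sum()
            iou = inter / max(union, 1)                 # crude mask-overlap term
            cost[i, j] = -pred_scores[i] - iou          # lower cost = better match
    rows, cols = linear_sum_assignment(cost)            # Hungarian algorithm
    return list(zip(rows.tolist(), cols.tolist()))      # (pred_idx, gt_idx) pairs

# Each GT gets exactly one prediction; all unmatched predictions are background.
preds = np.random.rand(5, 32, 32)
gts = np.random.rand(2, 32, 32) > 0.5
print(one_to_one_assign(np.random.rand(5), preds, gts))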
- Human Instance Matting via Mutual Guidance and Multi-Instance Refinement [70.06185123355249]
We introduce a new matting task called human instance matting (HIM).
HIM requires the pertinent model to automatically predict a precise alpha matte for each human instance.
Preliminary results are presented on general instance matting.
arXiv Detail & Related papers (2022-05-22T06:56:52Z)
- Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity [59.1823948436411]
We propose a novel approach for mask proposals, Generic Grouping Networks (GGNs).
Our approach combines a local measure of pixel affinity with instance-level mask supervision, producing a training regimen designed to make the model as generic as the data diversity allows.
arXiv Detail & Related papers (2022-04-12T22:37:49Z)
- Sparse Instance Activation for Real-Time Instance Segmentation [72.23597664935684]
We propose a conceptually novel, efficient, and fully convolutional framework for real-time instance segmentation.
SparseInst has extremely fast inference speed, achieving 40 FPS and 37.9 AP on the COCO benchmark (a sketch of the instance activation idea follows this entry).
arXiv Detail & Related papers (2022-03-24T03:15:39Z)
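The following is a minimal sketch of the instance activation idea: predict N spatial activation maps, normalize them, and use each to pool one feature vector per instance candidate. The shapes and the 1x1-conv predictor are assumptions, not the paper's exact design.

import torch
import torch.nn as nn

feat_dim, num_inst = 256, 8
features = torch.randn(1, feat_dim, 40, 40)            # backbone features
iam_head = nn.Conv2d(feat_dim, num_inst, kernel_size=1)

iam = iam_head(features).flatten(2).softmax(dim=-1)    # (1, N, HW): where each candidate "looks"
flat = features.flatten(2).transpose(1, 2)             # (1, HW, C)
inst_feats = iam @ flat                                # (1, N, C): one vector per candidate
print(inst_feats.shape)                                # torch.Size([1, 8, 256])

Class scores and mask kernels would then be predicted from these instance features, which is how such a design avoids dense anchors and NMS at inference time.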
- Virtual Multi-Modality Self-Supervised Foreground Matting for Human-Object Interaction [18.14237514372724]
We propose a Virtual Multi-modality Foreground Matting (VMFM) method to learn human-object interactive foreground.
The VMFM method requires no additional inputs, e.g., a trimap or a known background.
We reformulate foreground matting as a self-supervised multi-modality problem.
arXiv Detail & Related papers (2021-10-07T09:03:01Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders (a sketch of this layout follows this entry).
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
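The following is a hedged sketch of the shared-encoder, two-decoder layout: a glance decoder predicts coarse three-way semantics, a focus decoder predicts fine boundary detail, and the two outputs are merged into the final matte. Layer choices and the merge rule are illustrative assumptions.

import torch
import torch.nn as nn

class GlanceFocusSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU())
        # Glance decoder: 3-way semantics (background / transition / foreground).
        self.glance = nn.Conv2d(dim, 3, 3, padding=1)
        # Focus decoder: fine alpha values for the transition region.
        self.focus = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, image):
        feats = self.encoder(image)
        semantics = self.glance(feats).softmax(dim=1)       # (B, 3, H, W)
        detail = torch.sigmoid(self.focus(feats))           # (B, 1, H, W)
        fg, transition = semantics[:, 2:3], semantics[:, 1:2]
        # Merge: trust semantics in solid regions, detail near boundaries.
        return fg + transition * detail

alpha = GlanceFocusSketch()(torch.randn(1, 3, 64, 64))
print(alpha.shape)                                          # torch.Size([1, 1, 64, 64])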