Privacy-Preserving Portrait Matting
- URL: http://arxiv.org/abs/2104.14222v1
- Date: Thu, 29 Apr 2021 09:20:19 GMT
- Title: Privacy-Preserving Portrait Matting
- Authors: Jizhizi Li, Sihan Ma, Jing Zhang, Dacheng Tao
- Abstract summary: We present P3M-10k, the first large-scale anonymized benchmark for Privacy-Preserving Portrait Matting.
P3M-10k consists of 10,000 high-resolution face-blurred portrait images along with high-quality alpha mattes.
We propose P3M-Net, which leverages the power of a unified framework for both semantic perception and detail matting.
- Score: 73.98225485513905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been an increasing concern about the privacy issue raised
by using personally identifiable information in machine learning. However,
previous portrait matting methods were all based on identifiable portrait
images. To fill the gap, we present P3M-10k in this paper, which is the first
large-scale anonymized benchmark for Privacy-Preserving Portrait Matting.
P3M-10k consists of 10,000 high-resolution face-blurred portrait images along
with high-quality alpha mattes. We systematically evaluate both trimap-free and
trimap-based matting methods on P3M-10k and find that existing matting methods
show different generalization capabilities when following the
Privacy-Preserving Training (PPT) setting, i.e., "training on face-blurred
images and testing on arbitrary images". To devise a better trimap-free
portrait matting model, we propose P3M-Net, which leverages the power of a
unified framework for both semantic perception and detail matting, and
specifically emphasizes the interaction between them and the encoder to
facilitate the matting process. Extensive experiments on P3M-10k demonstrate
that P3M-Net outperforms the state-of-the-art methods in terms of both
objective metrics and subjective visual quality. Besides, it shows good
generalization capacity under the PPT setting, confirming the value of P3M-10k
for facilitating future research and enabling potential real-world
applications. The source code and dataset will be made publicly available.
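For concreteness, the PPT setting quoted above ("training on face-blurred images and testing on arbitrary images") can be sketched as a small preprocessing step that anonymizes only the training split. This is a minimal sketch: the face detector, blur kernel, and helper names below are illustrative assumptions, not the procedure actually used to build P3M-10k.

```python
# Minimal sketch of the Privacy-Preserving Training (PPT) setting:
# "training on face-blurred images and testing on arbitrary images".
# Detector choice and blur parameters are illustrative assumptions only.
import cv2

_FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(image_bgr, kernel=51):
    """Return a copy of the image with every detected face Gaussian-blurred."""
    out = image_bgr.copy()
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (kernel, kernel), 0)
    return out

def load_ppt_pair(train_path, test_path):
    """PPT protocol: anonymized training image, unmodified test image."""
    train_img = blur_faces(cv2.imread(train_path))  # face-blurred for training
    test_img = cv2.imread(test_path)                # arbitrary (non-blurred) at test time
    return train_img, test_img
```

In this protocol only the RGB training images are anonymized; the ground-truth alpha mattes are unchanged, and evaluation images remain arbitrary, non-blurred portraits.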
Related papers
- Efficient Portrait Matte Creation With Layer Diffusion and Connectivity Priors [16.916645195696137]
This work shows that one can leverage text prompts to generate high-quality portrait foregrounds and extract latent portrait mattes.
A large-scale portrait matting dataset is created, termed LD-Portrait-20K, with $20,051$ portrait foregrounds and high-quality alpha mattes.
The dataset also contributes to state-of-the-art video portrait matting, implemented by simple video segmentation and a trimap-based image matting model trained on this dataset.
arXiv Detail & Related papers (2025-01-27T15:41:19Z)
- PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction [77.89935657608926]
We propose a Pose-Free Large Reconstruction Model (PF-LRM) for reconstructing a 3D object from a few unposed images.
PF-LRM simultaneously estimates the relative camera poses in 1.3 seconds on a single A100 GPU.
arXiv Detail & Related papers (2023-11-20T18:57:55Z)
- PP-Matting: High-Accuracy Natural Image Matting [11.68134059283327]
PP-Matting is a trimap-free architecture that can achieve high-accuracy natural image matting.
Our method applies a high-resolution detail branch (HRDB) that extracts fine-grained details of the foreground.
Also, we propose a semantic context branch (SCB) that adopts a semantic segmentation subtask.
arXiv Detail & Related papers (2022-04-20T12:54:06Z)
- Rethinking Portrait Matting with Privacy Preserving [79.37601060952201]
We present P3M-10k, the first large-scale anonymized benchmark for Privacy-Preserving Portrait Matting (P3M)
P3M-10k consists of 10,421 high-resolution face-blurred portrait images along with high-quality alpha mattes.
We also present a unified matting model dubbed P3M-Net that is compatible with both CNN and transformer backbones.
arXiv Detail & Related papers (2022-03-31T06:26:07Z)
- Deep Automatic Natural Image Matting [82.56853587380168]
Automatic image matting (AIM) refers to estimating the soft foreground from an arbitrary natural image without any auxiliary input such as a trimap.
We propose a novel end-to-end matting network, which can predict a generalized trimap for any image of the above types as a unified semantic representation.
Our network trained on available composite matting datasets outperforms existing methods both objectively and subjectively.
arXiv Detail & Related papers (2021-07-15T10:29:01Z)
- Salient Image Matting [0.0]
We propose an image matting framework called Salient Image Matting to estimate the per-pixel opacity value of the most salient foreground in an image.
Our framework simultaneously deals with the challenge of learning a wide range of semantics and salient object types.
Our framework requires only a fraction of the expensive matting data needed by other automatic methods.
arXiv Detail & Related papers (2021-03-23T06:22:33Z)
- Portrait Neural Radiance Fields from a Single Image [68.66958204066721]
We present a method for estimating Neural Radiance Fields (NeRF) from a single portrait.
We propose to pretrain the weights of a multilayer perceptron (MLP), which implicitly models the volumetric density.
To improve generalization to unseen faces, we train in a canonical coordinate space approximated by 3D face morphable models.
We quantitatively evaluate the method using controlled captures and demonstrate the generalization to real portrait images, showing favorable results against state-of-the-art methods.
arXiv Detail & Related papers (2020-12-10T18:59:59Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders (see the sketch after this list).
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
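As noted in the last entry above, a common trimap-free design shares a single encoder between a coarse semantic ("glance") decoder and a boundary-detail ("focus") decoder and fuses their outputs into an alpha matte; the interaction between semantic perception and detail matting emphasized by P3M-Net is in a related spirit. The toy PyTorch module below is only a schematic of this shared-encoder, two-decoder idea; its layer widths, depth, and fusion rule are assumptions and do not reproduce the published GFM or P3M-Net architectures.

```python
# Schematic of a shared-encoder, two-decoder trimap-free matting network:
# a "glance" decoder for coarse semantics and a "focus" decoder for boundary detail.
# Layer sizes and the fusion rule are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)
    )

class TwoDecoderMatting(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder (downsamples by 4x in this toy version).
        self.encoder = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
        )
        # Glance decoder: background / unknown / foreground semantics (3 classes).
        self.glance = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 3, 1))
        # Focus decoder: fine alpha values for the boundary region.
        self.focus = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 1, 1))

    def forward(self, x):
        h, w = x.shape[-2:]
        feat = self.encoder(x)
        seg = F.interpolate(self.glance(feat), size=(h, w), mode="bilinear", align_corners=False)
        detail = torch.sigmoid(
            F.interpolate(self.focus(feat), size=(h, w), mode="bilinear", align_corners=False)
        )
        probs = seg.softmax(dim=1)               # [B, 3, H, W]: bg / unknown / fg
        fg, unknown = probs[:, 2:3], probs[:, 1:2]
        # Fuse: trust the glance decoder in definite regions,
        # and the focus decoder inside the unknown (boundary) region.
        alpha = fg + unknown * detail
        return alpha.clamp(0, 1)

if __name__ == "__main__":
    net = TwoDecoderMatting()
    alpha = net(torch.randn(1, 3, 256, 256))
    print(alpha.shape)  # torch.Size([1, 1, 256, 256])
```

In practice both decoders would be supervised jointly, e.g. with a segmentation loss on the glance branch and an L1 or composition loss on the predicted alpha; those training details are omitted here for brevity.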