Rethinking Portrait Matting with Privacy Preserving
- URL: http://arxiv.org/abs/2203.16828v2
- Date: Mon, 17 Apr 2023 00:19:30 GMT
- Title: Rethinking Portrait Matting with Privacy Preserving
- Authors: Sihan Ma, Jizhizi Li, Jing Zhang, He Zhang, Dacheng Tao
- Abstract summary: We present P3M-10k, the first large-scale anonymized benchmark for Privacy-Preserving Portrait Matting (P3M)
P3M-10k consists of 10,421 high resolution face-blurred portrait images along with high-quality alpha mattes.
We also present a unified matting model dubbed P3M-Net that is compatible with both CNN and transformer backbones.
- Score: 79.37601060952201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been an increasing concern about the privacy issue raised
by identifiable information in machine learning. However, previous portrait
matting methods were all based on identifiable images. To fill the gap, we
present P3M-10k, which is the first large-scale anonymized benchmark for
Privacy-Preserving Portrait Matting (P3M). P3M-10k consists of 10,421 high
resolution face-blurred portrait images along with high-quality alpha mattes,
which enables us to systematically evaluate both trimap-free and trimap-based
matting methods and obtain some useful findings about model generalization
ability under the privacy preserving training (PPT) setting. We also present a
unified matting model dubbed P3M-Net that is compatible with both CNN and
transformer backbones. To further mitigate the cross-domain performance gap
issue under the PPT setting, we devise a simple yet effective Copy and Paste
strategy (P3M-CP), which borrows facial information from public celebrity
images and directs the network to reacquire the face context at both data and
feature level. Extensive experiments on P3M-10k and public benchmarks
demonstrate the superiority of P3M-Net over state-of-the-art methods and the
effectiveness of P3M-CP in improving the cross-domain generalization ability,
implying a great significance of P3M for future research and real-world
applications.
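The abstract describes P3M-CP at the data level as borrowing facial information from public celebrity images so the network can reacquire face context despite training on face-blurred portraits. The paper's exact procedure is not given here, so the following is only a minimal sketch of the data-level idea under stated assumptions: both images come with known face bounding boxes, and the external face crop is pasted (with a simple nearest-neighbour resize) into the blurred face region of the training image. The function names and bbox convention are illustrative, not the authors' API.

```python
import numpy as np

def nearest_resize(patch: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x C patch (no external deps)."""
    h, w = patch.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row index for each output row
    cols = np.arange(out_w) * w // out_w  # source col index for each output col
    return patch[rows][:, cols]

def face_copy_paste(target: np.ndarray, target_bbox,
                    source: np.ndarray, source_bbox) -> np.ndarray:
    """Paste the face crop from `source` into the face region of `target`.

    Bboxes are (top, left, height, width). This is a data-level sketch only;
    the paper's P3M-CP may differ (e.g. blending, or its feature-level variant).
    """
    ty, tx, th, tw = target_bbox
    sy, sx, sh, sw = source_bbox
    out = target.copy()  # keep the original training image untouched
    face = source[sy:sy + sh, sx:sx + sw]
    out[ty:ty + th, tx:tx + tw] = nearest_resize(face, th, tw)
    return out
```

In a training pipeline, such an augmentation would be applied stochastically per sample, so the model sees a mix of blurred and copy-pasted face regions.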
Related papers
- PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape
Prediction [77.89935657608926]
We propose a Pose-Free Large Reconstruction Model (PF-LRM) for reconstructing a 3D object from a few unposed images.
PF-LRM simultaneously estimates the relative camera poses in 1.3 seconds on a single A100 GPU.
arXiv Detail & Related papers (2023-11-20T18:57:55Z)
- Learning Pixel-Adaptive Weights for Portrait Photo Retouching [1.9843222704723809]
Portrait photo retouching is a photo retouching task that emphasizes human-region priority and group-level consistency.
In this paper, we model local context cues to improve the retouching quality explicitly.
Experiments on PPR10K dataset verify the effectiveness of our method.
arXiv Detail & Related papers (2021-12-07T07:23:42Z)
- Facial Depth and Normal Estimation using Single Dual-Pixel Camera [81.02680586859105]
We introduce a DP-oriented Depth/Normal network that reconstructs the 3D facial geometry.
The accompanying dataset contains the corresponding ground-truth 3D models, including depth maps and surface normals in metric scale.
It achieves state-of-the-art performances over recent DP-based depth/normal estimation methods.
arXiv Detail & Related papers (2021-11-25T05:59:27Z)
- Direct Multi-view Multi-person 3D Pose Estimation [138.48139701871213]
We present Multi-view Pose transformer (MvP) for estimating multi-person 3D poses from multi-view images.
MvP directly regresses the multi-person 3D poses in a clean and efficient way, without relying on intermediate tasks.
We show experimentally that our MvP model outperforms the state-of-the-art methods on several benchmarks while being much more efficient.
arXiv Detail & Related papers (2021-11-07T13:09:20Z)
- Highly Efficient Natural Image Matting [15.977598189574659]
We propose a trimap-free natural image matting method with a lightweight model.
We construct an extremely lightweight model that achieves performance comparable to large models on popular natural image benchmarks while using only 1% (344k) of their parameters.
arXiv Detail & Related papers (2021-10-25T09:23:46Z)
- Privacy-Preserving Portrait Matting [73.98225485513905]
We present P3M-10k, the first large-scale anonymized benchmark for Privacy-Preserving Portrait Matting.
P3M-10k consists of 10,000 high-resolution face-blurred portrait images along with high-quality alpha mattes.
We propose P3M-Net, which leverages the power of a unified framework for both semantic perception and detail matting.
arXiv Detail & Related papers (2021-04-29T09:20:19Z)
- Inducing Predictive Uncertainty Estimation for Face Recognition [102.58180557181643]
We propose a method for automatically generating image quality training data from 'mated pairs' of face images.
We use the generated data to train a lightweight Predictive Confidence Network, termed as PCNet, for estimating the confidence score of a face image.
arXiv Detail & Related papers (2020-09-01T17:52:00Z)
- Plug-and-Play Rescaling Based Crowd Counting in Static Images [24.150701096083242]
We propose a new image patch rescaling module (PRM) and three independent PRM employed crowd counting methods.
The proposed frameworks use the PRM module to rescale the image regions (patches) that require special treatment, whereas the classification process helps in recognizing and discarding any cluttered crowd-like background regions which may result in overestimation.
arXiv Detail & Related papers (2020-01-06T21:43:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.