FaceEraser: Removing Facial Parts for Augmented Reality
- URL: http://arxiv.org/abs/2109.10760v1
- Date: Wed, 22 Sep 2021 14:30:12 GMT
- Title: FaceEraser: Removing Facial Parts for Augmented Reality
- Authors: Miao Hua, Lijie Liu, Ziyang Cheng, Qian He, Bingchuan Li, Zili Yi
- Abstract summary: Our task is to remove all facial parts and then impose visual elements onto the "blank" face for augmented reality.
We propose a novel data generation technique to produce paired training data that closely mimic the "blank" faces.
Our method has been integrated into commercial products and its effectiveness has been verified with unconstrained user inputs.
- Score: 10.575917056215289
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Our task is to remove all facial parts (e.g., eyebrows, eyes, mouth and
nose), and then impose visual elements onto the "blank" face for augmented
reality. Conventional object removal methods rely on image inpainting
techniques (e.g., EdgeConnect, HiFill) that are trained in a self-supervised
manner with randomly manipulated image pairs. Specifically, given a set of
natural images, randomly masked images are used as inputs and the raw images
are treated as ground truths. However, this technique does not satisfy the
requirements of facial parts removal, as it is hard to obtain "ground-truth"
images with real "blank" faces. To address this issue, we propose a novel
data generation technique to produce paired training data that closely mimic
the "blank" faces. Meanwhile, we propose a novel network architecture for
improved inpainting quality for our task. Finally, we demonstrate various
face-oriented augmented reality applications on top of our facial parts removal
model. Our method has been integrated into commercial products and its
effectiveness has been verified with unconstrained user inputs. The source
codes, pre-trained models and training data will be released for research
purposes.
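The conventional self-supervised setup that the abstract contrasts with (used by methods such as EdgeConnect and HiFill) can be sketched as follows. This is a minimal illustration, not the paper's method: the function name, rectangle-mask strategy, and sizes are all assumptions made for the example. A natural image serves as the ground truth, and a randomly masked copy serves as the network input.

```python
import numpy as np

def make_training_pair(image, rng, max_rects=4):
    """Build one self-supervised inpainting pair (illustrative sketch).

    image: HxWx3 float array in [0, 1]; the unmodified image is the target.
    Returns (masked_input, mask, ground_truth), where mask is 1 at erased pixels.
    """
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=np.float32)
    # Erase a few random rectangles (sizes here are arbitrary choices).
    for _ in range(rng.integers(1, max_rects + 1)):
        rh, rw = rng.integers(h // 8, h // 3), rng.integers(w // 8, w // 3)
        y, x = rng.integers(0, h - rh), rng.integers(0, w - rw)
        mask[y:y + rh, x:x + rw] = 1.0
    masked = image * (1.0 - mask[..., None])  # zero out the masked region
    return masked, mask, image

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3)).astype(np.float32)
masked, mask, target = make_training_pair(img, rng)
```

As the abstract argues, this recipe breaks down for facial parts removal: randomly masking a face and training against the original teaches the network to restore the eyes, nose, and mouth, whereas the desired ground truth is a face with those parts absent, which natural images do not provide.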
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Single Image, Any Face: Generalisable 3D Face Generation [59.9369171926757]
We propose a novel model, Gen3D-Face, which generates 3D human faces with unconstrained single image input.
To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images.
arXiv Detail & Related papers (2024-09-25T14:56:37Z)
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- ComFace: Facial Representation Learning with Synthetic Data for Comparing Faces [5.07975834105566]
We propose a facial representation learning method using synthetic images for comparing faces, called ComFace.
For effective representation learning, ComFace aims to acquire two feature representations, i.e., inter-personal facial differences and intra-personal facial changes.
Our ComFace, trained using only synthetic data, achieves transfer performance comparable to or better than general pre-training and state-of-the-art representation learning methods trained using real images.
arXiv Detail & Related papers (2024-05-25T02:44:07Z)
- DiffusionFace: Towards a Comprehensive Dataset for Diffusion-Based Face Forgery Analysis [71.40724659748787]
DiffusionFace is the first diffusion-based face forgery dataset.
It covers various forgery categories, including unconditional and text-guided facial image generation, Img2Img, Inpaint, and diffusion-based facial exchange algorithms.
It provides essential metadata and a real-world internet-sourced forgery facial image dataset for evaluation.
arXiv Detail & Related papers (2024-03-27T11:32:44Z)
- FaceChain: A Playground for Human-centric Artificial Intelligence Generated Content [36.48960592782015]
FaceChain is a personalized portrait generation framework that combines a series of customized image-generation model and a rich set of face-related perceptual understanding models.
We inject several SOTA face models into the generation procedure, achieving more efficient label tagging, data processing, and model post-processing than previous solutions.
Based on FaceChain, we further develop several applications to build a broader playground that better shows its value, including virtual try-on and 2D talking heads.
arXiv Detail & Related papers (2023-08-28T02:20:44Z)
- A survey on facial image deblurring [3.6775758132528877]
Blurring of a facial image has a great impact on high-level vision tasks such as face recognition.
This paper surveys and summarizes recently published methods for facial image deblurring, most of which are based on deep learning.
We show the performance of classical methods on standard datasets and metrics, and briefly discuss the differences between model-based and learning-based methods.
arXiv Detail & Related papers (2023-02-10T02:24:56Z)
- DigiFace-1M: 1 Million Digital Face Images for Face Recognition [25.31469201712699]
State-of-the-art face recognition models show impressive accuracy, achieving over 99.8% on the Labeled Faces in the Wild dataset.
We introduce a large-scale synthetic dataset for face recognition, obtained by rendering digital faces using a computer graphics pipeline.
arXiv Detail & Related papers (2022-10-05T22:02:48Z)
- Learning Inverse Rendering of Faces from Real-world Videos [52.313931830408386]
Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption of consistency of albedo and normal.
Our network is trained on both real and synthetic data, benefiting from both.
arXiv Detail & Related papers (2020-03-26T17:26:40Z)
- Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.