What makes you, you? Analyzing Recognition by Swapping Face Parts
- URL: http://arxiv.org/abs/2206.11759v1
- Date: Thu, 23 Jun 2022 14:59:18 GMT
- Title: What makes you, you? Analyzing Recognition by Swapping Face Parts
- Authors: Claudio Ferrari, Matteo Serpentoni, Stefano Berretti, Alberto Del Bimbo
- Abstract summary: We propose to swap facial parts as a way to disentangle the recognition relevance of different face parts, like eyes, nose and mouth.
In our method, swapping parts from a source face to a target one is performed by fitting a 3D prior, which establishes dense pixel correspondence between parts.
Seamless cloning is then used to obtain smooth transitions between the mapped source regions and the shape and skin tone of the target face.
- Score: 25.96441722307888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning advanced face recognition to an unprecedented accuracy.
However, understanding how local parts of the face affect the overall
recognition performance is still mostly unclear. Among other approaches, face
swapping has been explored to this end, but only for the entire face. In this paper, we
propose to swap facial parts as a way to disentangle the recognition relevance
of different face parts, like eyes, nose and mouth. In our method, swapping
parts from a source face to a target one is performed by fitting a 3D prior,
which establishes dense pixel correspondence between parts, while also
handling pose differences. Seamless cloning is then used to obtain smooth
transitions between the mapped source regions and the shape and skin tone of
the target face. We devised an experimental protocol that allowed us to draw
some preliminary conclusions when the swapped images are classified by deep
networks, indicating a prominence of the eyes and eyebrows region. Code
available at https://github.com/clferrari/FacePartsSwap
Related papers
- Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms [4.814908894876767]
Face swapping algorithms place no emphasis on the eyes, relying on pixel or feature matching losses that consider the entire face to guide the training process.
We propose a novel loss equation for the training of face swapping models, leveraging a pretrained gaze estimation network to directly improve representation of the eyes.
Our findings have implications on face swapping for special effects, as digital avatars, as privacy mechanisms, and more.
arXiv Detail & Related papers (2024-02-05T16:53:54Z) - ReliableSwap: Boosting General Face Swapping Via Reliable Supervision [9.725105108879717]
This paper proposes to construct reliable supervision, dubbed cycle triplets, which serves as the image-level guidance when the source identity differs from the target one during training.
Specifically, we use face reenactment and blending techniques to synthesize the swapped face from real images in advance.
Our face swapping framework, named ReliableSwap, can boost the performance of any existing face swapping network with negligible overhead.
arXiv Detail & Related papers (2023-06-08T17:01:14Z) - Introducing Explicit Gaze Constraints to Face Swapping [1.9386396954290932]
Face swapping combines one face's identity with another face's non-appearance attributes to generate a synthetic face.
Image-based loss metrics that consider the full face do not effectively capture the perceptually important, yet spatially small, eye regions.
We propose a novel loss function that leverages gaze prediction to inform the face swap model during training and compare against existing methods.
arXiv Detail & Related papers (2023-05-25T15:12:08Z) - Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z) - FlowFace: Semantic Flow-guided Shape-aware Face Swapping [43.166181219154936]
We propose a semantic flow-guided two-stage framework for shape-aware face swapping, namely FlowFace.
Our FlowFace consists of a face reshaping network and a face swapping network.
We employ a pre-trained face masked autoencoder to extract facial features from both the source face and the target face.
arXiv Detail & Related papers (2022-12-06T07:23:39Z) - Human Face Recognition from Part of a Facial Image based on Image Stitching [0.0]
Most of the current techniques for face recognition require the presence of a full face of the person to be recognized.
In this work, we adopt a face-stitching process that completes the missing part by mirroring the part visible in the image.
The selected face recognition algorithms that are applied here are Eigenfaces and geometrical methods.
arXiv Detail & Related papers (2022-03-10T19:31:57Z) - HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z) - Robust Face-Swap Detection Based on 3D Facial Shape Information [59.32489266682952]
Face-swap images and videos are increasingly used by malicious attackers to discredit key figures.
Previous detection techniques based on pixel-level artifacts focus on subtle, unclear patterns while ignoring available semantic cues.
We propose a biometric information based method to fully exploit the appearance and shape feature for face-swap detection of key figures.
arXiv Detail & Related papers (2021-04-28T09:35:48Z) - Face Forgery Detection by 3D Decomposition [72.22610063489248]
We consider a face image as the production of the intervention of the underlying 3D geometry and the lighting environment.
By disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture.
We propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns.
arXiv Detail & Related papers (2020-11-19T09:25:44Z) - DeepFake Detection Based on the Discrepancy Between the Face and its Context [94.47879216590813]
We propose a method for detecting face swapping and other identity manipulations in single images.
Our approach involves two networks: (i) a face identification network that considers the face region bounded by a tight semantic segmentation, and (ii) a context recognition network that considers the face context.
We describe a method which uses the recognition signals from our two networks to detect such discrepancies.
Our method achieves state-of-the-art results on the FaceForensics++, Celeb-DF-v2, and DFDC benchmarks for face manipulation detection, and even generalizes to detect fakes produced by unseen methods.
arXiv Detail & Related papers (2020-08-27T17:04:46Z) - Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over state-of-the-art methods.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.