Introducing Explicit Gaze Constraints to Face Swapping
- URL: http://arxiv.org/abs/2305.16138v1
- Date: Thu, 25 May 2023 15:12:08 GMT
- Title: Introducing Explicit Gaze Constraints to Face Swapping
- Authors: Ethan Wilson, Frederick Shic, Eakta Jain
- Abstract summary: Face swapping combines one face's identity with another face's non-appearance attributes to generate a synthetic face.
Image-based loss metrics that consider the full face do not effectively capture the perceptually important, yet spatially small, eye regions.
We propose a novel loss function that leverages gaze prediction to inform the face swap model during training and compare against existing methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face swapping combines one face's identity with another face's non-appearance
attributes (expression, head pose, lighting) to generate a synthetic face. This
technology is rapidly improving, but falls flat when reconstructing some
attributes, particularly gaze. Image-based loss metrics that consider the full
face do not effectively capture the perceptually important, yet spatially
small, eye regions. Improving gaze in face swaps can improve naturalness and
realism, benefiting applications in entertainment, human computer interaction,
and more. Improved gaze will also directly improve Deepfake detection efforts,
serving as ideal training data for classifiers that rely on gaze for
classification. We propose a novel loss function that leverages gaze prediction
to inform the face swap model during training and compare against existing
methods. We find all methods to significantly benefit gaze in resulting face
swaps.
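Concretely, a gaze constraint of this kind can be implemented by freezing a pretrained gaze estimator and penalizing the difference between its predictions on the swapped face and on the target face. The following is a minimal PyTorch sketch under that assumption; the class name, the `gaze_net` interface, and the distance metric are illustrative, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class GazeConsistencyLoss(nn.Module):
    """Penalize gaze mismatch between swapped and target faces.
    `gaze_net` is an assumed pretrained estimator mapping a face image
    to a gaze direction (e.g. pitch/yaw); it stays frozen."""

    def __init__(self, gaze_net: nn.Module):
        super().__init__()
        self.gaze_net = gaze_net.eval()
        for p in self.gaze_net.parameters():
            p.requires_grad_(False)  # only the face-swap model is trained

    def forward(self, swapped: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Gradients flow through the frozen estimator into the swapped image.
        return (self.gaze_net(swapped) - self.gaze_net(target)).norm(dim=-1).mean()
```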
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms
Face swapping algorithms place no emphasis on the eyes, relying on pixel or feature matching losses that consider the entire face to guide the training process.
We propose a novel loss equation for the training of face swapping models, leveraging a pretrained gaze estimation network to directly improve representation of the eyes.
Our findings have implications on face swapping for special effects, as digital avatars, as privacy mechanisms, and more.
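A gaze-centric term of this kind would typically augment, not replace, the existing pixel and feature matching objectives. A hedged sketch of such a composite loss follows; the individual loss callables and the weights are assumptions for exposition, not the paper's values.

```python
# Illustrative composite training objective; weights are assumptions.
def total_loss(swapped, source, target, pixel_loss, id_loss, gaze_loss,
               w_pix=1.0, w_id=10.0, w_gaze=5.0):
    return (w_pix * pixel_loss(swapped, target)      # full-face matching
            + w_id * id_loss(swapped, source)        # preserve source identity
            + w_gaze * gaze_loss(swapped, target))   # eye-region constraint
```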
arXiv Detail & Related papers (2024-02-05T16:53:54Z)
- High-Fidelity Face Swapping with Style Blending
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
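A rough sketch of what such an attention-based blending module might look like; the dimensions, token count, and residual design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StyleBlendModule(nn.Module):
    """Illustrative attention-based blending of a source identity code with
    a target style code (e.g. 512-d latent tokens from a StyleGAN-style
    encoder). Names and shapes are assumptions."""

    def __init__(self, dim: int = 512, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, src_id: torch.Tensor, tgt_style: torch.Tensor) -> torch.Tensor:
        # Query with the target style, attend over source identity tokens,
        # injecting identity cues where the target style "asks" for them.
        blended, _ = self.attn(query=tgt_style, key=src_id, value=src_id)
        return self.norm(tgt_style + blended)  # residual keeps target attributes

codes = StyleBlendModule()(torch.randn(2, 18, 512), torch.randn(2, 18, 512))
```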
arXiv Detail & Related papers (2023-12-17T23:22:37Z)
- ReliableSwap: Boosting General Face Swapping Via Reliable Supervision
This paper proposes to construct reliable supervision, dubbed cycle triplets, which serves as the image-level guidance when the source identity differs from the target one during training.
Specifically, we use face reenactment and blending techniques to synthesize the swapped face from real images in advance.
Our face swapping framework, named ReliableSwap, can boost the performance of any existing face swapping network with negligible overhead.
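A hypothetical sketch of the cycle-triplet idea: reenactment and blending synthesize a pseudo ground truth so the swap network can be supervised at image level even when source and target identities differ. `reenact` and `blend` stand in for off-the-shelf face reenactment and blending tools; they are assumptions, not ReliableSwap's actual modules.

```python
def make_cycle_triplet(source_img, target_img, reenact, blend):
    # Drive the source face with the target's pose and expression...
    reenacted = reenact(identity=source_img, driving=target_img)
    # ...then blend it into the target frame: a synthetic-but-realistic
    # "ground truth" swap whose true appearance is known in advance.
    pseudo_gt = blend(face=reenacted, background=target_img)
    return source_img, target_img, pseudo_gt  # (source, target, supervision)
```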
arXiv Detail & Related papers (2023-06-08T17:01:14Z)
- Learning Representations for Masked Facial Recovery
The pandemic of recent years has led to a dramatic increase in people wearing protective masks in public venues.
One way to address the problem is to revert to face recovery methods as a preprocessing step.
We introduce a method that is specific for the recovery of the face image from an image of the same individual wearing a mask.
arXiv Detail & Related papers (2022-12-28T22:22:15Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
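One plausible instantiation of this transfer attack is PGD-style optimization against a substitute autoencoder; the sketch below is an assumption about the mechanics, not the paper's exact setup.

```python
import torch

def perturb_against_swapping(face, substitute, eps=8/255, steps=10, alpha=2/255):
    """Craft a perturbation on an assumed substitute face-reconstruction
    model, hoping it transfers to unseen black-box DeepFake models."""
    adv = face.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Push the substitute's reconstruction away from the clean face.
        loss = torch.nn.functional.mse_loss(substitute(adv), face)
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv + alpha * grad.sign()).detach()   # ascend the loss
        adv = face + (adv - face).clamp(-eps, eps)   # stay within eps-ball
        adv = adv.clamp(0, 1)                        # valid image range
    return adv
```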
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Dual-Attention GAN for Large-Pose Face Frontalization
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
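For intuition on integrating local features with their long-range dependencies, a generic SAGAN-style self-attention block over feature maps is sketched below; it is an illustrative stand-in, not DA-GAN's actual generator component.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Generic self-attention over feature maps (SAGAN-style sketch)."""

    def __init__(self, ch: int):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned mixing weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.k(x).flatten(2)                   # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)        # (b, hw, hw) long-range deps
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                # residual connection
```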
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
- Exploiting Semantics for Face Image Deblurring
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
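A minimal sketch of what a parsing-weighted structural loss could look like: per-pixel reconstruction error scaled up on semantically important regions. The function name, label scheme, and weights are assumptions, not the paper's exact loss.

```python
import torch

def adaptive_structural_loss(pred, sharp, parsing, weights):
    """`parsing` holds integer face-parsing labels per pixel;
    `weights` maps each label to an importance factor (assumed values)."""
    per_pixel = (pred - sharp).abs().mean(dim=1)   # (b, h, w) L1 over channels
    w = weights[parsing]                           # per-pixel importance lookup
    return (w * per_pixel).mean()

# e.g. label 0 = background, 1 = skin, 2 = eyes, 3 = mouth
weights = torch.tensor([0.5, 1.0, 4.0, 4.0])
```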
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
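The "heuristic error" can be understood as follows: regions the first stage fails to reproduce when swapping a face with itself (hair, glasses, other occlusions) show up as large reconstruction error, which can then guide refinement. A hedged sketch; `stage1_net` is an assumed callable, not FaceShifter's released interface.

```python
import torch

def heuristic_error(target, stage1_net):
    """Self-swap the target through the stage-1 network; the residual
    highlights occlusions that stage 1 cannot reproduce."""
    with torch.no_grad():
        self_swap = stage1_net(source=target, target=target)
    return (target - self_swap).abs()  # high where stage 1 loses detail
```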
arXiv Detail & Related papers (2019-12-31T17:57:46Z)
- It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation
We propose an appearance-based method that only takes the full face image as input.
Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps.
We show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation.
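The spatial-weighting idea can be sketched as a small convolutional head that predicts one weight per feature-map location, re-scaling backbone features so informative regions such as the eyes dominate. The layer below illustrates the concept under that assumption; it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialWeights(nn.Module):
    """Illustrative spatial-weighting layer over CNN feature maps."""

    def __init__(self, ch: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(ch, ch // 2, 1), nn.ReLU(),
            nn.Conv2d(ch // 2, 1, 1),
        )

    def forward(self, feats):                 # feats: (b, c, h, w)
        w = torch.relu(self.head(feats))      # (b, 1, h, w) non-negative map
        return feats * w                      # broadcast weights over channels
```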
arXiv Detail & Related papers (2016-11-27T15:00:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.