Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms
- URL: http://arxiv.org/abs/2402.03188v1
- Date: Mon, 5 Feb 2024 16:53:54 GMT
- Title: Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms
- Authors: Ethan Wilson, Frederick Shic, Sophie Jörg, Eakta Jain
- Abstract summary: Face swapping algorithms place no emphasis on the eyes, relying on pixel or feature matching losses that consider the entire face to guide the training process.
We propose a novel loss equation for the training of face swapping models, leveraging a pretrained gaze estimation network to directly improve representation of the eyes.
Our findings have implications for face swapping in special effects, digital avatars, privacy mechanisms, and more.
- Score: 4.814908894876767
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Advances in face swapping have enabled the automatic generation of highly
realistic faces. Yet face swaps are perceived differently than real faces,
with key differences in viewer behavior surrounding the eyes. Face
swapping algorithms generally place no emphasis on the eyes, relying on pixel
or feature matching losses that consider the entire face to guide the training
process. We further investigate viewer perception of face swaps, focusing our
analysis on the presence of an uncanny valley effect. We additionally propose a
novel loss equation for the training of face swapping models, leveraging a
pretrained gaze estimation network to directly improve representation of the
eyes. We confirm that viewed face swaps do elicit uncanny responses from
viewers. Our proposed improvements significantly reduce viewing angle errors
between face swaps and their source material. Our method additionally reduces
the prevalence of the eyes as a deciding factor when viewers perform deepfake
detection tasks. Our findings have implications for face swapping in special
effects, digital avatars, privacy mechanisms, and more; negative responses
from users could limit its effectiveness in these applications. Our gaze
improvements are a first step towards alleviating negative viewer perceptions
via a targeted approach.
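As a concrete illustration of the abstract's core idea, below is a minimal sketch of how a pretrained, frozen gaze estimator could be folded into a face-swap training objective. The names (`gaze_net`, `total_loss`), the pitch/yaw interface, and the L1 angle penalty are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def gaze_loss(gaze_net, swapped_face, source_face):
    """Penalize disagreement in estimated gaze between swap and source."""
    with torch.no_grad():                    # target gaze needs no gradient
        target_gaze = gaze_net(source_face)  # (B, 2) pitch/yaw estimates
    # Keep the graph here so gradients reach the face-swap generator.
    pred_gaze = gaze_net(swapped_face)
    return F.l1_loss(pred_gaze, target_gaze)

def total_loss(matching_loss, swapped_face, source_face, gaze_net,
               lambda_gaze=1.0):
    """Usual full-face matching loss plus the eye-targeted gaze term."""
    return matching_loss + lambda_gaze * gaze_loss(
        gaze_net, swapped_face, source_face)
```

Because the gaze network stays frozen, the extra term only steers the generator toward eye regions that the estimator reads the same way as the source.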
Related papers
- SymFace: Additional Facial Symmetry Loss for Deep Face Recognition [1.5612101323427952]
This research examines the natural phenomenon of facial symmetry in the face verification problem.
We show that the two output embedding vectors of split faces must project close to each other in the output embedding space.
Inspired by this concept, we penalize the network based on the disparity between the embeddings of the symmetric pair of split faces.
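A hedged sketch of the symmetry penalty described above: embed the two mirrored half-faces with a recognition backbone and penalize the disparity between their embeddings. The `backbone` interface, the resize step, and the cosine formulation are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn.functional as F

def symmetry_loss(backbone, faces):
    """faces: (B, C, H, W) aligned face crops; backbone returns embeddings."""
    h, w = faces.shape[-2:]
    left = faces[..., : w // 2]
    right = torch.flip(faces[..., w // 2 :], dims=[-1])  # mirror right half
    # Resize half-faces back to the backbone's expected input size
    # (an assumption; the paper may handle halves differently).
    left = F.interpolate(left, size=(h, w), mode="bilinear", align_corners=False)
    right = F.interpolate(right, size=(h, w), mode="bilinear", align_corners=False)
    emb_l = F.normalize(backbone(left), dim=-1)
    emb_r = F.normalize(backbone(right), dim=-1)
    # Disparity = cosine distance between the symmetric pair's embeddings.
    return (1.0 - (emb_l * emb_r).sum(dim=-1)).mean()
```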
arXiv Detail & Related papers (2024-09-18T09:06:55Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Introducing Explicit Gaze Constraints to Face Swapping [1.9386396954290932]
Face swapping combines one face's identity with another face's non-appearance attributes to generate a synthetic face.
Image-based loss metrics that consider the full face do not effectively capture the perceptually important, yet spatially small, eye regions.
We propose a novel loss function that leverages gaze prediction to inform the face swap model during training and compare against existing methods.
arXiv Detail & Related papers (2023-05-25T15:12:08Z)
- What makes you, you? Analyzing Recognition by Swapping Face Parts [25.96441722307888]
We propose to swap facial parts as a way to disentangle the recognition relevance of different face parts, like eyes, nose and mouth.
In our method, swapping parts from a source face to a target one is performed by fitting a 3D prior, which establishes dense pixel correspondence between parts.
Seamless cloning is then used to obtain smooth transitions between the mapped source regions and the shape and skin tone of the target face.
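The dense 3D correspondence fitting is specific to that paper, but the blending step it names is standard Poisson image editing, available through OpenCV. A minimal usage sketch (file names and patch placement are placeholders):

```python
import cv2
import numpy as np

src = cv2.imread("mapped_source_part.png")   # mapped source region (placeholder)
dst = cv2.imread("target_face.png")          # target face (placeholder)
mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)  # blend the whole patch
center = (dst.shape[1] // 2, dst.shape[0] // 2)      # where the patch lands
# Poisson editing matches gradients inside the mask to the target's
# boundary colors, giving smooth skin-tone transitions at the seam.
blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.png", blended)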
arXiv Detail & Related papers (2022-06-23T14:59:18Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction; adversarial examples are then transferred from the substitute model directly to inaccessible black-box DeepFake models.
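To make the transfer recipe concrete, here is a minimal sketch using a single FGSM step against a substitute reconstruction model, after which the perturbed image is handed unchanged to the black-box face-swap model. FGSM and the epsilon value are stand-ins; the paper's actual attack may differ.

```python
import torch
import torch.nn.functional as F

def substitute_fgsm(substitute, face, epsilon=4 / 255):
    """Perturb `face` to maximize the substitute's reconstruction error."""
    face = face.clone().requires_grad_(True)
    recon = substitute(face)                       # white-box substitute
    loss = F.mse_loss(recon, face.detach())
    loss.backward()
    # One signed-gradient ascent step, clipped to the valid image range.
    adv = (face + epsilon * face.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()  # feed `adv` to the inaccessible black-box model
```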
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- A new face swap method for image and video domains: a technical report [60.47144478048589]
We introduce a new face swap pipeline that is based on FaceShifter architecture.
A new eye loss function, a super-resolution block, and Gaussian-based face mask generation lead to improvements in quality.
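One plausible reading of "Gaussian-based face mask generation" (the report's exact construction is not given here): rasterize the convex hull of facial landmarks, then soften its boundary with a Gaussian blur to obtain an alpha matte for compositing the swap. The landmark source and blur width are assumptions.

```python
import cv2
import numpy as np

def soft_face_mask(landmarks, height, width, sigma=11.0):
    """landmarks: (N, 2) array of (x, y) facial keypoints."""
    mask = np.zeros((height, width), dtype=np.float32)
    hull = cv2.convexHull(landmarks.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 1.0)            # hard face region
    # Gaussian falloff turns the hard hull edge into a soft alpha matte.
    mask = cv2.GaussianBlur(mask, (0, 0), sigmaX=sigma)
    return np.clip(mask, 0.0, 1.0)
```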
arXiv Detail & Related papers (2022-02-07T10:15:50Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform feature-level visualization to demonstrate how FaceChannel's inherent ability to learn and combine facial features changes in a constrained social-interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation [82.16380486281108]
We propose an appearance-based method that only takes the full face image as input.
Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps.
We show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation.
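A hedged sketch of the spatial-weights mechanism described above: a small stack of 1x1 convolutions predicts a per-location weight map that rescales the feature maps, letting the network emphasize informative face regions such as the eyes. Layer count and sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialWeights(nn.Module):
    """Elementwise spatial reweighting of CNN feature maps."""
    def __init__(self, channels):
        super().__init__()
        self.weight_head = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1), nn.ReLU(inplace=True),
        )

    def forward(self, feats):            # feats: (B, C, H, W)
        w = self.weight_head(feats)      # (B, 1, H, W) non-negative weights
        return feats * w                 # broadcast the map over all channels
```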
arXiv Detail & Related papers (2016-11-27T15:00:10Z)