Has the Virtualization of the Face Changed Facial Perception? A Study of the Impact of Photo Editing and Augmented Reality on Facial Perception
- URL: http://arxiv.org/abs/2303.00612v3
- Date: Fri, 26 Apr 2024 18:49:56 GMT
- Title: Has the Virtualization of the Face Changed Facial Perception? A Study of the Impact of Photo Editing and Augmented Reality on Facial Perception
- Authors: Louisa Conwill, Sam English Anthony, Walter J. Scheirer
- Abstract summary: We present the results of six surveys on familiarity with different styles of facial filters and ability to discern whether images are filtered.
Our results demonstrate that faces modified with more traditional face filters are perceived similarly to unmodified faces.
We discuss possible explanations for these results, including a societal adjustment to traditional photo editing techniques or the inherent differences in the different types of filters.
- Score: 7.532782211020641
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Augmented reality and other photo editing filters are popular methods used to modify faces online. Considering the important role of facial perception in communication, how do we perceive this increasing number of modified faces? In this paper we present the results of six surveys that measure familiarity with different styles of facial filters, perceived strangeness of faces edited with different filters, and ability to discern whether images are filtered. Our results demonstrate that faces modified with more traditional face filters are perceived similarly to unmodified faces, and faces filtered with augmented reality filters are perceived differently from unmodified faces. We discuss possible explanations for these results, including a societal adjustment to traditional photo editing techniques or the inherent differences in the different types of filters. We conclude with a discussion of how to build online spaces more responsibly based on our results.
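The study itself is survey-based, but the "ability to discern whether images are filtered" can be illustrated with a standard signal-detection measure. The sketch below is a hypothetical illustration, not the authors' analysis code; the counts are invented for the example, and using d' is an assumption about one reasonable way to quantify discrimination.

```python
# Hypothetical sketch (not the authors' analysis code): one common way to
# quantify "ability to discern whether images are filtered" is a signal
# detection d', computed from hit and false-alarm rates.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw counts, with a log-linear correction to avoid infinities."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts, for illustration only: 40 filtered and 40 unfiltered
# trials per condition.
print("AR filters:          d' =", round(d_prime(34, 6, 8, 32), 2))
print("Traditional filters: d' =", round(d_prime(22, 18, 16, 24), 2))
```

A higher d' for AR-filtered faces than for traditionally edited ones would be consistent with the reported finding that AR filters are perceived as more distinct from unmodified faces.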
Related papers
- FaceFilterSense: A Filter-Resistant Face Recognition and Facial Attribute Analysis Framework [1.673834743879962]
Fun selfie filters have come into widespread mainstream use, affecting the functioning of facial biometric systems.
AR-based filters and filters that distort facial key points are currently in vogue and make faces highly unrecognizable, even to the naked eye.
To mitigate these limitations, we aim to perform a holistic impact analysis of the latest filters and propose a user recognition model with the filtered images.
arXiv Detail & Related papers (2024-04-12T07:04:56Z)
- Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms [4.814908894876767]
Face swapping algorithms place no emphasis on the eyes, relying on pixel or feature matching losses that consider the entire face to guide the training process.
We propose a novel loss equation for the training of face swapping models, leveraging a pretrained gaze estimation network to directly improve representation of the eyes.
Our findings have implications on face swapping for special effects, as digital avatars, as privacy mechanisms, and more.
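A minimal sketch of what such a gaze-centric loss term could look like, assuming a frozen, pretrained gaze estimator `gaze_net` that maps a face crop to a gaze direction; this illustrates the general idea rather than the paper's released training code, and the loss weight is an assumption.

```python
import torch
import torch.nn.functional as F

def face_swap_loss(swapped, target, gaze_net, lambda_gaze=0.1):
    """Reconstruction loss plus a gaze-consistency term (illustrative weight)."""
    # Standard pixel-level term over the whole face.
    recon = F.l1_loss(swapped, target)

    # Gaze term: gaze_net is assumed pretrained and frozen
    # (requires_grad_(False)), so only the swapping model is updated.
    with torch.no_grad():
        gaze_target = gaze_net(target)
    gaze_swapped = gaze_net(swapped)   # gradients flow back into `swapped`
    gaze = F.mse_loss(gaze_swapped, gaze_target)

    return recon + lambda_gaze * gaze
```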
arXiv Detail & Related papers (2024-02-05T16:53:54Z)
- FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders [81.21440457805932]
We propose a novel framework, FaceMAE, in which face privacy and recognition performance are considered simultaneously.
Randomly masked face images are used to train the reconstruction module in FaceMAE.
We also perform extensive privacy-preserving face recognition experiments on several public face datasets.
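As a rough illustration of the masked-reconstruction idea, the sketch below randomly drops image patches in the generic masked-autoencoder style; the patch size, masking ratio, and model interface are assumptions, not FaceMAE's actual design.

```python
import torch

def random_patch_mask(images, patch=16, mask_ratio=0.75):
    """Zero out a random subset of non-overlapping patches in each image."""
    b, c, h, w = images.shape
    ph, pw = h // patch, w // patch
    n_patches = ph * pw
    n_keep = int(n_patches * (1 - mask_ratio))

    keep_mask = torch.zeros(b, n_patches, dtype=torch.bool)
    for i in range(b):
        keep_mask[i, torch.randperm(n_patches)[:n_keep]] = True

    # Expand the patch-level mask to pixel resolution.
    mask = keep_mask.view(b, 1, ph, pw).float()
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask, mask

# Training-step sketch: a reconstruction module is trained to restore the
# original pixels, with the loss evaluated only on the dropped patches.
# masked, mask = random_patch_mask(batch)
# loss = (((model(masked) - batch) ** 2) * (1 - mask)).mean()
```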
arXiv Detail & Related papers (2022-05-23T07:19:42Z)
- LFW-Beautified: A Dataset of Face Images with Beautification and Augmented Reality Filters [53.180678723280145]
We contribute a database of facial images that includes several manipulations.
It includes image enhancement filters (which mostly modify contrast and lighting) and augmented reality filters that incorporate items like animal noses or glasses.
Each dataset contains 4,324 images of size 64 x 64, with a total of 34,592 images.
arXiv Detail & Related papers (2022-03-11T17:05:10Z)
- Fun Selfie Filters in Face Recognition: Impact Assessment and Removal [13.715060479044167]
This work investigates the impact of fun selfie filters on face recognition systems.
Ten relevant fun selfie filters are selected to create a database.
To mitigate such unwanted effects, a GAN-based selfie filter removal algorithm is proposed.
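A hedged sketch of what a GAN-based filter-removal objective might look like, framed as image-to-image translation from filtered to unfiltered faces with a pix2pix-style adversarial plus L1 loss; the generator/discriminator interfaces and loss weights are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def removal_losses(generator, discriminator, filtered, clean, lambda_l1=100.0):
    """Pix2pix-style losses for mapping filtered faces back to unfiltered ones."""
    restored = generator(filtered)

    # Generator: fool the discriminator and stay close to the unfiltered target.
    logits_fake = discriminator(restored)
    g_adv = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
    g_loss = g_adv + lambda_l1 * F.l1_loss(restored, clean)

    # Discriminator: real unfiltered faces vs. (detached) generator outputs.
    logits_real = discriminator(clean)
    logits_gen = discriminator(restored.detach())
    d_loss = 0.5 * (
        F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
        + F.binary_cross_entropy_with_logits(logits_gen, torch.zeros_like(logits_gen)))

    # In practice the two losses are optimized in alternating steps.
    return g_loss, d_loss
```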
arXiv Detail & Related papers (2022-02-12T09:12:31Z)
- On the Effect of Selfie Beautification Filters on Face Detection and Recognition [53.561797148529664]
Social media image filters modify image contrast or illumination, or occlude parts of the face with, for example, artificial glasses or animal noses.
We develop a method to reconstruct the applied manipulation with a modified version of the U-NET segmentation network.
From a recognition perspective, we employ distance measures and trained machine learning algorithms applied to features extracted using a ResNet-34 network trained to recognize faces.
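The distance-based matching step can be illustrated with a short sketch; the feature vectors are assumed to come from a face-trained embedder such as the ResNet-34 mentioned above, and the decision threshold is illustrative rather than calibrated.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two feature vectors (e.g. face embeddings)."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(probe_features, reference_features, threshold=0.4):
    """Verification decision; the threshold here is illustrative, not calibrated."""
    return cosine_distance(probe_features, reference_features) < threshold
```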
arXiv Detail & Related papers (2021-10-17T22:10:56Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- I Only Have Eyes for You: The Impact of Masks On Convolutional-Based Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts to recognizing facial expressions from people wearing masks.
We also perform feature-level visualizations to demonstrate how the FaceChannel's inherent ability to learn and combine facial features changes in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z)
- A Study of Face Obfuscation in ImageNet [94.2949777826947]
In this paper, we explore face obfuscation in the ImageNet challenge.
Most categories in the ImageNet challenge are not people categories; nevertheless, many incidental people are in the images.
We benchmark various deep neural networks on face-blurred images and observe a disparate impact on different categories.
Results show that features learned on face-blurred images are equally transferable.
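The obfuscation operation itself can be sketched briefly: blur detected face regions before the images are used. The detector below is OpenCV's stock Haar cascade, used purely for illustration; the paper relies on its own face annotations rather than this detector.

```python
import cv2

def blur_faces(image_bgr, kernel=(51, 51)):
    """Gaussian-blur every detected face region in a BGR image (in place)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        image_bgr[y:y + h, x:x + w] = cv2.GaussianBlur(
            image_bgr[y:y + h, x:x + w], kernel, 0)
    return image_bgr
```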
arXiv Detail & Related papers (2021-03-10T17:11:34Z)