Subjective Face Transform using Human First Impressions
- URL: http://arxiv.org/abs/2309.15381v1
- Date: Wed, 27 Sep 2023 03:21:07 GMT
- Title: Subjective Face Transform using Human First Impressions
- Authors: Chaitanya Roygaga, Joshua Krinsky, Kai Zhang, Kenny Kwok, Aparna
Bharati
- Abstract summary: This work uses generative models to find semantically meaningful edits to a face image that change perceived attributes.
We train on real and synthetic faces and evaluate on in-domain and out-of-domain images using predictive models and human ratings.
- Score: 5.026535087391025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans tend to form quick subjective first impressions of non-physical
attributes when seeing someone's face, such as perceived trustworthiness or
attractiveness. To understand what variations in a face lead to different
subjective impressions, this work uses generative models to find semantically
meaningful edits to a face image that change perceived attributes. Unlike prior
work that relied on statistical manipulation in feature space, our end-to-end
framework considers trade-offs between preserving identity and changing
perceptual attributes. It maps identity-preserving latent space directions to
changes in attribute scores, enabling transformation of any input face along an
attribute axis according to a target change. We train on real and synthetic
faces, evaluate for in-domain and out-of-domain images using predictive models
and human ratings, demonstrating the generalizability of our approach.
Ultimately, such a framework can be used to understand and explain biases in
subjective interpretation of faces that are not dependent on the identity.
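As a rough illustration of the mechanism described in the abstract (not the authors' released code), the sketch below moves a latent code along a single identity-preserving direction by the amount needed to shift a stand-in attribute predictor's score by a requested delta; the predictor, direction, and latent dimension are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512

# Hypothetical linear predictor for a perceived attribute (e.g. trustworthiness).
attr_weights = rng.normal(size=latent_dim)
def predict_attribute(w: np.ndarray) -> float:
    return float(attr_weights @ w)

# Hypothetical identity-preserving editing direction in the latent space.
direction = rng.normal(size=latent_dim)
direction /= np.linalg.norm(direction)

def transform(w: np.ndarray, target_delta: float, probe: float = 1e-2) -> np.ndarray:
    """First-order sketch: estimate how fast the attribute changes along the
    direction, then scale the edit to hit the requested score change."""
    slope = (predict_attribute(w + probe * direction) - predict_attribute(w)) / probe
    return w + (target_delta / slope) * direction

w0 = rng.normal(size=latent_dim)
w1 = transform(w0, target_delta=1.5)
print(predict_attribute(w1) - predict_attribute(w0))  # ~1.5 for this toy predictor
```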
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z) - How Do You Perceive My Face? Recognizing Facial Expressions in Multi-Modal Context by Modeling Mental Representations [5.895694050664867]
We introduce a novel approach for facial expression classification that goes beyond simple classification tasks.
Our model accurately classifies a perceived face and synthesizes the corresponding mental representation perceived by a human when observing a face in context.
We evaluate synthesized expressions in a human study, showing that our model effectively produces approximations of human mental representations.
arXiv Detail & Related papers (2024-09-04T09:32:40Z) - When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for
Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
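A toy illustration of the extended W+ code referenced above, assuming an 18-layer stack of 512-d style vectors and a random editing direction; a real pipeline would obtain both from an actual StyleGAN inverter and known editing directions.

```python
import numpy as np

rng = np.random.default_rng(1)
num_layers, style_dim = 18, 512

w_plus = rng.normal(size=(num_layers, style_dim))   # face embedded in W+ (one vector per layer)
edit_direction = rng.normal(size=style_dim)
edit_direction /= np.linalg.norm(edit_direction)

def apply_edit(w: np.ndarray, strength: float, layers=range(4, 8)) -> np.ndarray:
    """Apply the direction only to mid-level style layers; leaving the other
    layers untouched is one way to trade edit strength against identity."""
    edited = w.copy()
    for i in layers:
        edited[i] += strength * edit_direction
    return edited

w_edited = apply_edit(w_plus, strength=2.0)
print(np.linalg.norm(w_edited - w_plus))  # only the edited layers moved
```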
arXiv Detail & Related papers (2023-11-29T09:05:14Z) - Face Identity-Aware Disentanglement in StyleGAN [15.753131748318335]
We introduce PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles face attributes from a person's identity.
Our experiments demonstrate that the modifications of face attributes performed by PluGeN4Faces are significantly less invasive on the remaining characteristics of the image than in the existing state-of-the-art models.
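A hedged sketch of the disentanglement idea (not the PluGeN4Faces implementation): if the latent code is split into identity and attribute factors, an attribute can be changed by overwriting only its slot while the identity block stays intact; the split and slot indices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=512)
identity_slice = slice(0, 480)                                # assumed identity factors
attribute_slots = {"smile": 480, "age": 481, "glasses": 482}  # assumed attribute factors

def set_attribute(code: np.ndarray, name: str, value: float) -> np.ndarray:
    """Overwrite a single attribute factor; identity factors stay intact."""
    edited = code.copy()
    edited[attribute_slots[name]] = value
    return edited

smiling = set_attribute(latent, "smile", 2.0)
print(bool(np.allclose(smiling[identity_slice], latent[identity_slice])))  # True
```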
arXiv Detail & Related papers (2023-09-21T12:54:09Z) - Explaining Bias in Deep Face Recognition via Image Characteristics [9.569575076277523]
We evaluate ten state-of-the-art face recognition models, comparing their fairness in terms of security and usability on two data sets.
We then analyze the impact of image characteristics on model performance.
arXiv Detail & Related papers (2022-08-23T17:18:23Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial
Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six different datasets, each with distinct affective representations.
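A rough sketch in the spirit of adapting only the encoder's last layer (not the CIAO implementation): a small trainable adapter sits on top of a frozen facial encoder and is tuned per dataset with a standard contrastive (InfoNCE-style) loss; the backbone, data, and dimensions are random stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen stand-in for a pretrained facial encoder.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256))
for p in backbone.parameters():
    p.requires_grad = False

adapter = nn.Linear(256, 128)                  # the only trainable piece, one per dataset
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

def info_nce(z1, z2, temperature=0.1):
    """Standard contrastive loss with positives on the diagonal."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

x1 = torch.randn(32, 1, 64, 64)                # two augmented views of the same faces
x2 = x1 + 0.05 * torch.randn_like(x1)
loss = info_nce(adapter(backbone(x1)), adapter(backbone(x2)))
loss.backward()
opt.step()
print(float(loss))
```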
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose a novel transformer-based representation for attribute evaluation method (TransFA).
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z) - SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematically empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
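A minimal sketch of the identity mixup idea as summarized above, assuming identities are represented by embedding vectors that condition the synthetic generator; the names, shapes, and the simple convex combination are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
id_a, id_b = rng.normal(size=512), rng.normal(size=512)   # two identity embeddings

def identity_mixup(e1: np.ndarray, e2: np.ndarray, lam: float) -> np.ndarray:
    """Convex combination of two identity embeddings, lam in [0, 1]."""
    return lam * e1 + (1.0 - lam) * e2

lam = float(rng.uniform(0.0, 1.0))
mixed = identity_mixup(id_a, id_b, lam)
# `mixed` would condition the face generator in place of a single identity;
# domain mixup (DM) would analogously blend synthetic and real images.
print(lam, mixed[:3])
```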
arXiv Detail & Related papers (2021-08-18T03:41:54Z) - Face Age Progression With Attribute Manipulation [11.859913430860335]
We propose a novel holistic model for this task, Face Age progression With Attribute Manipulation (FAWAM).
We address the task in a bottom-up manner, as two submodules: face age progression and face attribute manipulation.
For face aging, we use an attribute-conscious face aging model with a pyramidal generative adversarial network that can model age-specific facial changes.
arXiv Detail & Related papers (2021-06-14T18:26:48Z) - I Only Have Eyes for You: The Impact of Masks On Convolutional-Based
Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts to recognizing facial expressions from persons wearing masks.
We also perform feature-level visualizations to demonstrate how the FaceChannel's ability to learn and combine facial features changes in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z) - Exploring Racial Bias within Face Recognition via per-subject
Adversarially-Enabled Data Augmentation [15.924281804465252]
We propose a novel adversarially derived data augmentation methodology that aims to enable dataset balance at a per-subject level.
Our aim is to automatically construct a synthesised dataset by transforming facial images across varying racial domains.
In a side-by-side comparison, we show the positive impact our proposed technique can have on the recognition performance for (racial) minority groups.
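A hedged sketch of per-subject balancing under the stated goal (not the paper's method): for each subject, synthesized copies are added across target domains until every domain is equally represented; `translate_face` is a hypothetical stand-in for the adversarially derived cross-domain generator.

```python
import numpy as np

rng = np.random.default_rng(4)
domains = ["domain_a", "domain_b", "domain_c"]

def translate_face(image: np.ndarray, target_domain: str) -> np.ndarray:
    """Hypothetical stand-in for a cross-domain face translator."""
    return image + 0.01 * rng.normal(size=image.shape)

def balance_subject(images_by_domain: dict) -> dict:
    """Top up every domain to the size of the largest one with synthesized faces."""
    target = max(len(v) for v in images_by_domain.values())
    source = [img for imgs in images_by_domain.values() for img in imgs]
    for dom in domains:
        imgs = images_by_domain.setdefault(dom, [])
        while len(imgs) < target:
            imgs.append(translate_face(source[rng.integers(len(source))], dom))
    return images_by_domain

subject = {"domain_a": [rng.normal(size=(64, 64)) for _ in range(4)],
           "domain_b": [rng.normal(size=(64, 64))]}
balanced = balance_subject(subject)
print({d: len(v) for d, v in balanced.items()})  # every domain now holds 4 images
```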
arXiv Detail & Related papers (2020-04-19T19:46:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.