Bayesian optimization for automatic design of face stimuli
- URL: http://arxiv.org/abs/2007.09989v1
- Date: Mon, 20 Jul 2020 10:27:18 GMT
- Title: Bayesian optimization for automatic design of face stimuli
- Authors: Pedro F. da Costa, Romy Lorenz, Ricardo Pio Monti, Emily Jones, Robert
Leech
- Abstract summary: We propose a novel framework which combines generative networks (GANs) with Bayesian optimization to identify individual response patterns to many different faces.
Formally, we employ Bayesian optimization to efficiently search the latent space of state-of-the-art GAN models, with the aim of automatically generating novel faces.
We show how the algorithm can efficiently locate an individual's optimal face while mapping out their response across different semantic transformations of a face.
- Score: 2.572404739180802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Investigating the cognitive and neural mechanisms involved with face
processing is a fundamental task in modern neuroscience and psychology. To
date, the majority of such studies have focused on the use of pre-selected
stimuli. The absence of personalized stimuli presents a serious limitation, as
it fails to account for how each individual's face-processing system is tuned to
cultural embeddings or how it is disrupted in disease. In this work, we propose
a novel framework which combines generative adversarial networks (GANs) with
Bayesian optimization to identify individual response patterns to many
different faces. Formally, we employ Bayesian optimization to efficiently
search the latent space of state-of-the-art GAN models, with the aim of
automatically generating novel faces that maximize an individual subject's
response. We present results from a web-based proof-of-principle study, where
participants rated images of themselves generated via performing Bayesian
optimization over the latent space of a GAN. We show how the algorithm can
efficiently locate an individual's optimal face while mapping out their
response across different semantic transformations of a face; inter-individual
analyses suggest how the approach can provide rich information about individual
differences in face processing.
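The core loop described in the abstract, Bayesian optimization over a GAN's latent space to maximize a subject's response, can be sketched in a few dozen lines. The following is a minimal illustration, not the authors' implementation: the GAN decoder is omitted, the subject's rating is simulated by a hypothetical peaked function of the latent code, the latent dimensionality is reduced to a toy size, and a hand-rolled Gaussian-process surrogate with an upper-confidence-bound acquisition stands in for whatever surrogate and acquisition the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 4  # toy latent dimensionality (real GANs use e.g. 512)
TARGET = rng.normal(size=LATENT_DIM)  # hidden "optimal face" latent (simulated)

def subject_rating(z):
    """Simulated subject response: peaks when z is near the hidden target.
    In the study this would be a participant's rating of the decoded face."""
    return np.exp(-0.5 * np.sum((z - TARGET) ** 2))

def rbf_kernel(A, B, length=2.0):
    """Squared-exponential kernel between two sets of latent codes."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Gaussian-process posterior mean and std at query points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)  # diag of posterior covariance
    return mu, np.sqrt(np.maximum(var, 1e-12))

def bayes_opt(n_init=5, n_iter=25, n_cand=256):
    """Sequentially pick the latent code with the highest upper confidence
    bound among random candidates, query the (simulated) subject, refit."""
    X = rng.normal(size=(n_init, LATENT_DIM))  # random initial latents
    y = np.array([subject_rating(z) for z in X])
    for _ in range(n_iter):
        cand = rng.normal(size=(n_cand, LATENT_DIM))  # candidate latent codes
        mu, sd = gp_posterior(X, y, cand)
        z_next = cand[np.argmax(mu + 2.0 * sd)]  # UCB acquisition
        X = np.vstack([X, z_next])
        y = np.append(y, subject_rating(z_next))
    return X[np.argmax(y)], y.max()

best_z, best_r = bayes_opt()
print(best_r)
```

In the actual study each `subject_rating` call is a human judgment, so the whole point of the surrogate is sample efficiency: the GP lets the loop propose informative faces rather than sampling the latent space uniformly.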
Related papers
- Diverse Code Query Learning for Speech-Driven Facial Animation [2.1779479916071067]
Speech-driven facial animation aims to synthesize lip-synchronized 3D talking faces following the given speech signal.
We propose predicting multiple samples conditioned on the same audio signal and then explicitly encouraging sample diversity to produce diverse facial animations.
arXiv Detail & Related papers (2024-09-27T21:15:21Z)
- Appearance Debiased Gaze Estimation via Stochastic Subject-Wise Adversarial Learning [33.55397868171977]
Appearance-based gaze estimation has been attracting attention in computer vision, and remarkable improvements have been achieved using various deep learning techniques.
We propose a novel framework: subject-wise gaZE learning (SAZE), which trains a network to generalize the appearance of subjects.
Our experimental results verify the robustness of the method in that it yields state-of-the-art performance, achieving 3.89 and 4.42 on the MPIIGaze and EyeDiap datasets, respectively.
arXiv Detail & Related papers (2024-01-25T00:23:21Z)
- Multimodal Adaptive Fusion of Face and Gait Features using Keyless Attention based Deep Neural Networks for Human Identification [67.64124512185087]
Soft biometrics such as gait are widely used with face in surveillance tasks like person recognition and re-identification.
We propose a novel adaptive multi-biometric fusion strategy for the dynamic incorporation of gait and face biometric cues by leveraging keyless attention deep neural networks.
arXiv Detail & Related papers (2023-03-24T05:28:35Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, the framework achieves highly realistic face swapping on in-the-wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z)
- TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network [72.41798177302175]
We propose a novel paradigm based on the self-attention mechanism (i.e., the core of Transformer) to fully explore the representation capacity of the facial structure feature.
Specifically, we design a Transformer-CNN aggregation network (TANet) consisting of two paths, one of which uses CNNs to restore fine-grained facial details.
By aggregating the features from the above two paths, the consistency of global facial structure and fidelity of local facial detail restoration are strengthened simultaneously.
arXiv Detail & Related papers (2021-09-16T18:15:07Z)
- One-shot Face Reenactment Using Appearance Adaptive Normalization [30.615671641713945]
The paper proposes a novel generative adversarial network for one-shot face reenactment.
It can animate a single face image to a different pose and expression while keeping its original appearance.
arXiv Detail & Related papers (2021-02-08T03:36:30Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
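The two findings in the InterFaceGAN summary, linear semantic directions in latent space and disentanglement via subspace projection, reduce to simple vector arithmetic. A hedged sketch follows: the direction vectors here are random stand-ins, whereas in InterFaceGAN they are the normals of separating hyperplanes learned by linear classifiers on labelled latent codes, and the attribute names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 512  # latent dimensionality typical of StyleGAN-class models

# Stand-in unit-norm semantic directions; in InterFaceGAN these come from
# linear classifiers (e.g. for "smiling", "age") fit on labelled latents.
n_smile = rng.normal(size=DIM)
n_smile /= np.linalg.norm(n_smile)
n_age = rng.normal(size=DIM)
n_age /= np.linalg.norm(n_age)

def edit(z, direction, alpha):
    """Move a latent code along a semantic direction: z' = z + alpha * n."""
    return z + alpha * direction

def disentangle(n_primary, n_conditioned):
    """Subspace projection: strip the component of the primary direction
    that lies along the conditioned one, so editing the primary attribute
    no longer drags the conditioned attribute along with it."""
    n_new = n_primary - (n_primary @ n_conditioned) * n_conditioned
    return n_new / np.linalg.norm(n_new)

n_smile_only = disentangle(n_smile, n_age)

z = rng.normal(size=DIM)
z_edit = edit(z, n_smile_only, alpha=3.0)

# The projected direction is orthogonal to the conditioned attribute, so
# the edit leaves the "age" coordinate unchanged up to float error.
print(abs((z_edit - z) @ n_age))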
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.