Can gender categorization influence the perception of animated virtual humans?
- URL: http://arxiv.org/abs/2208.02386v1
- Date: Wed, 3 Aug 2022 23:45:49 GMT
- Title: Can gender categorization influence the perception of animated virtual humans?
- Authors: V. Araujo, D. Schaffer, A. B. Costa, S. R. Musse
- Abstract summary: We reproduce, through CG, a perceptual study that aims to assess gender bias in relation to a simulated baby.
The results of our study with virtual babies were similar to the findings with real babies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Animations have become increasingly realistic with the evolution of Computer Graphics (CG). In particular, human models and behaviors are represented through animated virtual humans, sometimes with a high level of realism. Gender is a characteristic closely tied to human identification, so virtual humans assigned to a specific gender are generally given stereotyped representations through movements, clothes, hair, and colors, so that users read them as the designers intended. An important area of study is finding out whether participants' perceptions change depending on how a virtual human is visually presented. Findings in this area can help the industry guide the modeling and animation of virtual humans to deliver the expected impact to the audience. In this paper, we reproduce, through CG, a perceptual study that aims to assess gender bias in relation to a simulated baby. In the original study, two groups of people watched the same video of a baby reacting to the same stimuli; one group was told the baby was female and the other was told the same baby was male, producing different perceptions. The results of our study with virtual babies were similar to the findings with real babies: people's emotional responses change depending on the character's gender attribute, which in this case was conveyed only by the baby's name. Our research indicates that merely stating the name of a virtual human can be enough to create a gender perception that affects participants' emotional responses.
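The comparison described above is a between-subjects design: the same video is rated by two groups that differ only in the name given for the baby. The snippet below is not taken from the paper; it is a minimal sketch, assuming Likert-style emotion ratings and an independent-samples (Welch's) t-test, of how ratings from two such naming conditions could be compared. The variable names and example values are hypothetical, and the paper's own statistical treatment may differ.

```python
# Minimal illustrative sketch (not the paper's actual analysis pipeline):
# compare perceived-emotion ratings for the same virtual-baby video under
# two naming conditions. Assumes 1-7 Likert ratings per participant.
from scipy import stats

# Hypothetical ratings; real values would come from study participants.
ratings_female_name = [2, 3, 2, 4, 3, 2, 3, 2]  # group told a female name
ratings_male_name   = [4, 5, 4, 3, 5, 4, 4, 5]  # group told a male name

# Welch's independent-samples t-test: does the implied gender (via the name)
# shift how much of the emotion participants perceive in the identical video?
t_stat, p_value = stats.ttest_ind(ratings_female_name, ratings_male_name,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Ratings differ between naming conditions.")
else:
    print("No significant difference detected.")
```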
Related papers
- Discovering Hidden Visual Concepts Beyond Linguistic Input in Infant Learning [18.43931715859825]
As computer vision seeks to replicate the human vision system, understanding infant visual development may offer valuable insights.
In this paper, we present an interdisciplinary study exploring this question.
Can a computational model that imitates the infant learning process develop broader visual concepts similar to how infants naturally learn?
Our work bridges cognitive science and computer vision by analyzing the internal representations of a computational model trained on an infant's visual and linguistic inputs.
arXiv Detail & Related papers (2025-01-09T12:55:55Z)
- Mitigation of gender bias in automatic facial non-verbal behaviors generation [0.45088680687810573]
We introduce a classifier capable of discerning the gender of a speaker from their non-verbal cues.
We present a new model, FairGenderGen, which integrates a gender discriminator and a gradient reversal layer into our previous behavior generation model.
arXiv Detail & Related papers (2024-10-09T06:41:24Z)
- SynPlay: Importing Real-world Diversity for a Synthetic Human Dataset [19.32308498024933]
We introduce Synthetic Playground (SynPlay), a new synthetic human dataset that aims to bring out the diversity of human appearance in the real world.
We focus on two factors to achieve a level of diversity that has not yet been seen in previous works: realistic human motions and poses.
We show that using SynPlay in model training leads to enhanced accuracy over existing synthetic datasets for human detection and segmentation.
arXiv Detail & Related papers (2024-08-21T17:58:49Z)
- Can we truly transfer an actor's genuine happiness to avatars? An investigation into virtual, real, posed and spontaneous faces [0.7182245711235297]
This study aims to evaluate Ekman's action units in datasets of real human faces, posed and spontaneous, and virtual human faces.
We also conducted a case study with specific movie characters, such as She-Hulk and Genius.
This investigation can help several areas of knowledge, whether using real or virtual human beings, in education, health, entertainment, games, security, and even legal matters.
arXiv Detail & Related papers (2023-12-04T18:53:42Z)
- Do humans and Convolutional Neural Networks attend to similar areas during scene classification: Effects of task and image type [0.0]
We investigated how the tasks used to elicit human attention maps interact with image characteristics in modulating the similarity between humans and CNNs.
We varied the type of image to be categorized, using either singular, salient objects, indoor scenes consisting of object arrangements, or landscapes without distinct objects defining the category.
The influence of the human task strongly depended on image type: for objects, manual selection produced maps that were most similar to the CNN's, while the specific eye-movement task had little impact.
arXiv Detail & Related papers (2023-07-25T09:02:29Z)
- Compositional 3D Human-Object Neural Animation [93.38239238988719]
Human-object interactions (HOIs) are crucial for human-centric scene understanding applications such as human-centric visual generation, AR/VR, and robotics.
In this paper, we address this challenge in HOI animation from a compositional perspective.
We adopt neural human-object deformation to model and render HOI dynamics based on implicit neural representations.
arXiv Detail & Related papers (2023-04-27T10:04:56Z)
- Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias [11.6727088473067]
We show that language-vision AI models trained on web scrapes learn biases of sexual objectification.
Images of female professionals are more likely to be associated with sexual descriptions than images of male professionals.
arXiv Detail & Related papers (2022-12-21T18:54:19Z)
- How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- Comparing Visual Reasoning in Humans and AI [66.89451296340809]
We created a dataset of complex scenes that contained human behaviors and social interactions.
We used a quantitative metric of similarity between the AI's or a human's scene description and a ground truth formed by five other human descriptions of each scene.
Results show that machine/human agreement on scene descriptions is much lower than human/human agreement for our complex scenes.
arXiv Detail & Related papers (2021-04-29T04:44:13Z)
- Gaze Perception in Humans and CNN-Based Model [66.89451296340809]
We compare how a CNN (convolutional neural network) based model of gaze and humans infer the locus of attention in images of real-world scenes.
We show that compared to the model, humans' estimates of the locus of attention are more influenced by the context of the scene.
arXiv Detail & Related papers (2021-04-17T04:52:46Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms MoCo, a visual-only state-of-the-art method.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)