Do All Asians Look the Same?: A Comparative Analysis of the East Asian
Facial Color Desires using Instagram
- URL: http://arxiv.org/abs/2304.03132v1
- Date: Thu, 6 Apr 2023 15:05:56 GMT
- Title: Do All Asians Look the Same?: A Comparative Analysis of the East Asian
Facial Color Desires using Instagram
- Authors: Jaeyoun You, Sojeong Park, Seok-Kyeong Hong, Bongwon Suh
- Abstract summary: This study uses selfie data to examine how people's desires for ideal facial representations vary by region.
We refute the notion that "all Asians prefer identical visuals," a subset of the prevalent Western belief that "all Asians look the same."
We propose a strategy for resolving the mismatch between real-world desires and the Western beauty market's views.
- Score: 7.927093463287226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Selfies represent people's desires, and social media platforms like Instagram
have been flooded with them. This study uses selfie data to examine how
people's desires for ideal facial representations vary by region, particularly
in East Asia. Through the analysis, we aim to refute the notion that "all Asians
prefer identical visuals," a subset of the prevalent Western belief that "all
Asians look the same." Our findings, reinforced by postcolonial
interpretations, dispute those assumptions. We propose a strategy for resolving
the mismatch between real-world desires and the Western beauty market's views.
We expect that the disparity between hegemonic color schemes and the augmented skin
colors shown by our results may facilitate the study of color and Asian
identity.
Related papers
- Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular culture [1.4204016278692333]
We use state-of-the-art machine learning models to classify over 300,000 images spanning over five decades.
We find that racial minorities appear far less frequently than their White counterparts, and when they do appear, they are portrayed less prominently.
We also find that women are more likely to be portrayed with their full bodies, whereas men are more frequently presented with their faces.
arXiv Detail & Related papers (2024-05-10T11:34:47Z)
- AI-generated faces influence gender stereotypes and racial homogenization [1.6647208383676708]
We document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight attributes.
This analysis reveals significant racial homogenization, depicting nearly all Middle Eastern men as bearded, brown-skinned, and wearing traditional attire.
We propose debiasing solutions that allow users to specify the desired distributions of race and gender when generating images.
arXiv Detail & Related papers (2024-02-01T20:32:14Z)
- Studying Bias in GANs through the Lens of Race [91.95264864405493]
We study how the performance and evaluation of generative image models are impacted by the racial composition of their training datasets.
Our results show that the racial compositions of generated images successfully preserve that of the training data.
However, we observe that truncation, a technique used to generate higher quality images during inference, exacerbates racial imbalances in the data.
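The truncation trick is a standard GAN inference technique rather than something specific to that paper; as a rough, generic sketch (illustrative names, not the paper's code or model), it pulls each latent sample toward the mean latent, trading diversity for fidelity, which is why it concentrates outputs around whatever the majority of the training data looks like:
```python
import numpy as np

def truncate_latents(z, psi=0.7):
    # Generic truncation-trick sketch (illustrative, not the paper's code):
    # interpolate each latent toward the mean latent. psi < 1 shrinks the
    # spread of samples, concentrating generated images near the majority
    # modes of the training distribution -- the mechanism linked above to
    # worsened racial imbalance.
    z_mean = z.mean(axis=0, keepdims=True)
    return z_mean + psi * (z - z_mean)

# Toy usage: 1,000 latent vectors of dimension 512.
z = np.random.randn(1000, 512)
z_trunc = truncate_latents(z, psi=0.5)
print(z.std(), z_trunc.std())  # truncated latents have roughly half the spread
```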
arXiv Detail & Related papers (2022-09-06T22:25:56Z)
- American == White in Multimodal Language-and-Image AI [3.4157048274143316]
Three state-of-the-art language-and-image AI models are evaluated.
We show that White individuals are more associated with collective in-group words than are Asian, Black, or Latina/o individuals.
The results indicate that biases equating American identity with being White are learned by language-and-image AI.
arXiv Detail & Related papers (2022-07-01T23:45:56Z)
- Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation [48.632358823108326]
Virtual facial avatars will play an increasingly important role in immersive communication, games and the metaverse.
This requires accurate recovery of the appearance, represented by albedo, regardless of age, sex, or ethnicity.
arXiv Detail & Related papers (2022-05-08T22:01:30Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
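For readers unfamiliar with the term, demographic parity asks that the rate of a favorable outcome be equal across protected groups; a minimal, generic sketch of the gap it measures (hypothetical names, not that paper's criterion or code):
```python
import numpy as np

def demographic_parity_gap(group_labels, outcomes):
    # Largest pairwise difference in positive-outcome rate across groups.
    # A generic demographic-parity check; the paper's point is that any such
    # criterion depends on how the groups themselves are defined.
    rates = {g: outcomes[group_labels == g].mean() for g in np.unique(group_labels)}
    vals = list(rates.values())
    return max(vals) - min(vals)

# Toy usage: outputs of a generative pipeline tagged with sensitive groups.
groups = np.array(["A"] * 60 + ["B"] * 40)
outs = np.array([1] * 45 + [0] * 15 + [1] * 20 + [0] * 20)
print(demographic_parity_gap(groups, outs))  # 0.75 - 0.50 = 0.25
```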
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- Black or White but never neutral: How readers perceive identity from yellow or skin-toned emoji [90.14874935843544]
Recent work established a connection between expression of identity and emoji usage on social media.
This work asks if, as with language, readers are sensitive to such acts of self-expression and use them to understand the identity of authors.
arXiv Detail & Related papers (2021-05-12T18:23:51Z)
- Country Image in COVID-19 Pandemic: A Case Study of China [79.17323278601869]
Country image has a profound influence on international relations and economic development.
In the worldwide outbreak of COVID-19, countries and their people display different reactions.
In this study, we take China as a specific and typical case and investigate its image with aspect-based sentiment analysis on a large-scale Twitter dataset.
arXiv Detail & Related papers (2020-09-12T15:54:51Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- Detecting East Asian Prejudice on Social Media [10.647940201343575]
We report on the creation of a classifier that detects and categorizes social media posts from Twitter into four classes: Hostility against East Asia, Criticism of East Asia, Meta-discussions of East Asian prejudice and a neutral class.
arXiv Detail & Related papers (2020-05-08T08:53:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.