"My Kind of Woman": Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law
- URL: http://arxiv.org/abs/2407.17474v1
- Date: Thu, 27 Jun 2024 20:03:27 GMT
- Title: "My Kind of Woman": Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law
- Authors: Miriam Doh and Anastasia Karagianni
- Abstract summary: This study delves into gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations.
By incorporating cognitive psychology and feminist legal theory, we examine how data used for AI training can foster gender diversity and fairness.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study delves into gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations. Drawing on the "averageness theory," which suggests a relationship between a face's attractiveness and the human ability to ascertain its gender, we explore the potential propagation of human bias into artificial intelligence (AI) systems. Utilising the AI model Stable Diffusion 2.1, we have created a dataset containing various connotations of attractiveness to test whether the correlation between attractiveness and accuracy in gender classification observed in human cognition persists within AI. Our findings indicate that akin to human dynamics, AI systems exhibit variations in gender classification accuracy based on attractiveness, mirroring social prejudices and stereotypes in their algorithmic decisions. This discovery underscores the critical need to consider the impacts of human perceptions on data collection and highlights the necessity for a multidisciplinary and intersectional approach to AI development and AI data training. By incorporating cognitive psychology and feminist legal theory, we examine how data used for AI training can foster gender diversity and fairness under the scope of the AI Act and GDPR, reaffirming how psychological and feminist legal theories can offer valuable insights for ensuring the protection of gender equality and non-discrimination in AI systems.
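As a concrete illustration of the pipeline the abstract describes, here is a minimal sketch (not the authors' released code), assuming the Hugging Face diffusers API for Stable Diffusion 2.1; the prompt wording, sample counts, and the `classify` stand-in for an off-the-shelf gender classifier are illustrative assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Generate face images whose prompts vary only the attractiveness connotation,
# then compare a gender classifier's accuracy across the resulting groups.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

PROMPTS = {
    "attractive": "portrait photo of an attractive {g} face",
    "neutral": "portrait photo of a {g} face",
    "unattractive": "portrait photo of an unattractive {g} face",
}

def group_accuracy(images, true_gender, classify):
    """Fraction of images whose predicted gender matches the prompted gender."""
    return sum(classify(img) == true_gender for img in images) / len(images)

for connotation, template in PROMPTS.items():
    for g in ("male", "female"):
        imgs = pipe([template.format(g=g)] * 8).images  # 8 samples per condition
        # `classify` stands in for any face-analysis gender classifier under test:
        # print(connotation, g, group_accuracy(imgs, g, classify))
```

If the averageness effect carries over into AI, per-group accuracies computed this way should differ systematically between the attractiveness conditions.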
Related papers
- Perceptions of Discriminatory Decisions of Artificial Intelligence: Unpacking the Role of Individual Characteristics
Personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) are associated with perceptions of AI outcomes.
Digital self-efficacy and technical knowledge are positively associated with attitudes toward AI.
Liberal ideologies are associated with lower outcome trust, more negative emotion, and greater skepticism.
arXiv Detail & Related papers (2024-10-17T06:18:26Z)
- Artificial Intelligence (AI) Onto-norms and Gender Equality: Unveiling the Invisible Gender Norms in AI Ecosystems in the Context of Africa
The study examines how onto-norms propagate certain gender practices in digital spaces through the character and norms of the spaces that shape AI design, training, and use.
By examining how data and content can knowingly or unknowingly be used to drive certain social norms in AI ecosystems, the study argues that onto-norms shape how AI engages with content that relates to women.
arXiv Detail & Related papers (2024-08-22T22:54:02Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We present a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications
The study examines existing research on gender bias in AI language models and identifies gaps in the current knowledge.
The findings shed light on gendered word associations, language usage, and biased narratives present in the outputs of Large Language Models.
The paper presents strategies for reducing gender bias in LLMs, including algorithmic approaches and data augmentation techniques; a sketch of one such augmentation follows this entry.
arXiv Detail & Related papers (2023-07-18T11:38:45Z)
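A minimal sketch of one augmentation technique in this family, counterfactual gender swapping; the word-pair list and helper below are illustrative assumptions, not the paper's actual method:

```python
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "his": "her", "her": "his",  # naive: glosses over her/him case ambiguity
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
}

def gender_swap(sentence: str) -> str:
    """Return a counterfactual copy of the sentence with gendered terms swapped."""
    out = []
    for tok in sentence.split():
        core = tok.strip(".,!?")
        repl = GENDER_PAIRS.get(core.lower())
        if repl is None:
            out.append(tok)
        else:
            if core[0].isupper():
                repl = repl.capitalize()
            out.append(tok.replace(core, repl))
    return " ".join(out)

# Training on both the original and the swapped copy balances
# profession-gender co-occurrences in the corpus:
print(gender_swap("He is a doctor and she is a nurse."))
# -> She is a doctor and he is a nurse.
```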
- Human-AI Coevolution
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone of a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper presents a comprehensive analysis of concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Responsible AI: Gender bias assessment in emotion recognition
This research work studies gender bias in deep learning methods for facial expression recognition.
More biased neural networks show a larger accuracy gap in emotion recognition between male and female test sets; a toy version of this gap measure is sketched after the entry.
arXiv Detail & Related papers (2021-03-21T17:00:21Z)
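A toy version of the accuracy-gap probe the summary describes; the `model.predict` interface and the dataset layout are assumptions for illustration:

```python
def accuracy(model, images, labels):
    """Plain accuracy of a facial-expression classifier on one test set."""
    preds = [model.predict(img) for img in images]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def gender_accuracy_gap(model, female_set, male_set):
    """Each set is an (images, labels) pair; a larger gap indicates more bias."""
    return abs(accuracy(model, *female_set) - accuracy(model, *male_set))
```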
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first measure specifically tailored for Information Retrieval that can quantify representational harms; a simplified, exposure-weighted variant is sketched after this entry.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
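A simplified, exposure-weighted sketch in the spirit of GSR, not the paper's exact definition; the per-result polarity values (-1 for stereotypically male content, +1 for stereotypically female) are illustrative assumptions:

```python
import math

def ranking_gender_score(polarities):
    """polarities: per-result gender polarity, in rank order (top result first)."""
    # DCG-style discount: higher-ranked results get more exposure weight.
    weights = [1.0 / math.log2(rank + 1) for rank in range(1, len(polarities) + 1)]
    return sum(w * p for w, p in zip(weights, polarities)) / sum(weights)

# A gender-neutral query whose top results skew female-polarised scores
# well above 0, signalling stereotype reinforcement:
print(ranking_gender_score([0.9, 0.8, 0.1, -0.2]))  # -> roughly 0.53
```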
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and fairer automated recruitment systems in particular; a toy demographic-parity check is sketched below.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
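A minimal sketch of the kind of fairness check such a testbed enables, assuming binary shortlisting decisions and gender labels attached to the synthetic profiles (the field values are illustrative assumptions):

```python
def demographic_parity_gap(decisions, genders):
    """decisions: 1 = shortlisted, 0 = rejected; genders: parallel group labels."""
    rate = lambda g: (
        sum(d for d, s in zip(decisions, genders) if s == g)
        / sum(1 for s in genders if s == g)
    )
    return abs(rate("female") - rate("male"))

# A gap near 0 suggests the screener's outcomes are independent of gender;
# a large gap flags bias absorbed from the consciously biased training scores.
print(demographic_parity_gap([1, 0, 1, 1], ["female", "male", "male", "female"]))
# -> abs(2/2 - 1/2) = 0.5
```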