Artificial Intelligence (AI) Onto-norms and Gender Equality: Unveiling the Invisible Gender Norms in AI Ecosystems in the Context of Africa
- URL: http://arxiv.org/abs/2408.12754v1
- Date: Thu, 22 Aug 2024 22:54:02 GMT
- Title: Artificial Intelligence (AI) Onto-norms and Gender Equality: Unveiling the Invisible Gender Norms in AI Ecosystems in the Context of Africa
- Authors: Angella Ndaka, Harriet Ratemo, Abigail Oppong, Eucabeth Majiwa
- Abstract summary: The study examines how ontonorms propagate certain gender practices in digital spaces through character and the norms of spaces that shape AI design, training and use.
By examining how data and content can knowingly or unknowingly be used to drive certain social norms in the AI ecosystems, this study argues that ontonorms shape how AI engages with the content that relates to women.
- Score: 3.7498611358320733
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The study examines how ontonorms propagate certain gender practices in digital spaces through character and the norms of spaces that shape AI design, training and use. Additionally, the study explores the different user behaviours and practices regarding whether, how, when, and why different gender groups engage in and with AI driven spaces. By examining how data and content can knowingly or unknowingly be used to drive certain social norms in the AI ecosystems, this study argues that ontonorms shape how AI engages with the content that relates to women. Ontonorms specifically shape the image, behaviour, and other media, including how gender identities and perspectives are, intentionally or otherwise, included, missed, or misrepresented in building and training AI systems.
Related papers
- She Works, He Works: A Curious Exploration of Gender Bias in AI-Generated Imagery [0.0]
This paper examines gender bias in AI-generated imagery of construction workers, highlighting discrepancies in the portrayal of male and female figures.
Grounded in Griselda Pollock's theories on visual culture and gender, the analysis reveals that AI models tend to sexualize female figures while portraying male figures as more authoritative and competent.
arXiv Detail & Related papers (2024-07-26T05:56:18Z)
- "My Kind of Woman": Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law [0.0]
This study delves into gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations.
By incorporating cognitive psychology and feminist legal theory, we examine how data used for AI training can foster gender diversity and fairness.
arXiv Detail & Related papers (2024-06-27T20:03:27Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
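The prompt-enumeration step described in the entry above can be pictured with a short sketch: cross identity markers with profession terms to build a grid of prompts, then collect generations for downstream comparison. The specific marker lists, professions, prompt template, and driver code below are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch of the prompt-enumeration idea behind this kind of audit.
# All word lists and the prompt template are illustrative placeholders.
from itertools import product

GENDER_MARKERS = ["woman", "man", "non-binary person"]
ETHNICITY_MARKERS = ["", "Black", "East Asian", "Hispanic", "White"]
PROFESSIONS = ["engineer", "nurse", "CEO", "teacher"]

def with_article(phrase: str) -> str:
    """Prefix 'a'/'an' so every prompt stays grammatical."""
    return ("an " if phrase[0].lower() in "aeiou" else "a ") + phrase

def build_prompts() -> list[str]:
    """Enumerate every (ethnicity, gender, profession) combination."""
    prompts = []
    for ethnicity, gender, job in product(ETHNICITY_MARKERS, GENDER_MARKERS, PROFESSIONS):
        subject = f"{ethnicity} {gender} {job}".strip()
        prompts.append(f"a photo of {with_article(subject)}")
    return prompts

if __name__ == "__main__":
    grid = build_prompts()
    print(len(grid), "prompts, e.g.:", grid[0])
    # Each prompt would be sent to the TTI system under audit, and the
    # resulting images compared across the identity markers.
```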
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
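The frequency-difference idea in the preceding entry can likewise be sketched: given attribute labels extracted from images generated for prompts with different gender indicators, compare how often each attribute appears per group. The attribute annotations below are fabricated placeholders for illustration only.

```python
# A toy sketch of comparing attribute frequencies across prompt groups.
# The per-image labels are fabricated; a real audit would derive them
# from the generated images (e.g., via classifiers or human annotation).
from collections import Counter

def attribute_frequencies(images):
    """Relative frequency of each attribute across a list of per-image labels."""
    counts = Counter(label for labels in images for label in labels)
    total = len(images)
    return {attr: n / total for attr, n in counts.items()}

# Hypothetical annotations for two prompt groups.
woman_prompts = [["long hair", "smiling"], ["long hair"], ["smiling"]]
man_prompts = [["suit"], ["suit", "smiling"], ["short hair"]]

freq_w = attribute_frequencies(woman_prompts)
freq_m = attribute_frequencies(man_prompts)
for attr in sorted(set(freq_w) | set(freq_m)):
    diff = freq_w.get(attr, 0.0) - freq_m.get(attr, 0.0)
    print(f"{attr:12s} woman-man frequency difference: {diff:+.2f}")
```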
- Enactive Artificial Intelligence: Subverting Gender Norms in Robot-Human Interaction [0.0]
This paper introduces Enactive Artificial Intelligence (eAI) as an intersectional gender-inclusive stance towards AI.
AI design is an enacted human sociocultural practice that reflects human culture and values.
arXiv Detail & Related papers (2023-01-17T21:27:20Z)
- Cultural Incongruencies in Artificial Intelligence [5.817158625734485]
We describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies.
Problems arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices.
arXiv Detail & Related papers (2022-11-19T18:45:02Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate, using no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
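The preceding entry trains a model for this rewriting; purely as a toy illustration of the task itself, the rule-based sketch below maps gendered pronouns to singular they. It sidesteps the hard cases that motivate a learned model, and everything in it is an illustrative assumption rather than the paper's method.

```python
# A rule-based toy for gender-neutral rewriting with singular they.
# Note the ambiguities a trained model resolves from context: "her" can be
# "their" or "them", and verb agreement must be repaired ("she is" -> "they are").
import re

PRONOUN_MAP = {
    "he": "they", "she": "they",
    "him": "them", "his": "their", "her": "their",  # "her" is ambiguous
    "himself": "themself", "herself": "themself",
}

def neutralize(sentence: str) -> str:
    """Replace gendered pronouns with singular they, preserving capitalization."""
    def sub(match):
        word = match.group(0)
        repl = PRONOUN_MAP[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl

    out = re.sub(r"\b(" + "|".join(PRONOUN_MAP) + r")\b", sub, sentence,
                 flags=re.IGNORECASE)
    # Crude agreement fix for the most common case only.
    return re.sub(r"\bthey is\b", "they are", out, flags=re.IGNORECASE)

print(neutralize("She said he would bring his laptop."))
# -> "They said they would bring their laptop."
```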
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Mitigating Gender Bias in Machine Learning Data Sets [5.075506385456811]
Gender bias has been identified in the context of employment advertising and recruitment tools.
This paper proposes a framework for the identification of gender bias in training data for machine learning.
arXiv Detail & Related papers (2020-05-14T12:06:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.