An Empirical Study on the Impact of Gender Diversity on Code Quality in AI Systems
- URL: http://arxiv.org/abs/2505.03082v1
- Date: Tue, 06 May 2025 00:37:27 GMT
- Title: An Empirical Study on the Impact of Gender Diversity on Code Quality in AI Systems
- Authors: Shamse Tasnim Cynthia, Banani Roy
- Abstract summary: The underrepresentation of women in software engineering raises concerns about homogeneity in AI development. This study examines how gender diversity within AI teams influences project popularity, code quality, and individual contributions.
- Score: 2.2160604288512324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid advancement of AI systems necessitates high-quality, sustainable code to ensure reliability and mitigate risks such as bias and technical debt. However, the underrepresentation of women in software engineering raises concerns about homogeneity in AI development. Studying gender diversity in AI systems is crucial, as diverse perspectives are essential for improving system robustness, reducing bias, and enhancing overall code quality. While prior research has demonstrated the benefits of diversity in general software teams, its specific impact on the code quality of AI systems remains unexplored. This study addresses this gap by examining how gender diversity within AI teams influences project popularity, code quality, and individual contributions. Our study makes three key contributions. First, we analyzed the relationship between team diversity and repository popularity, revealing that diverse AI repositories not only differ significantly from non-diverse ones but also achieve higher popularity and greater community engagement. Second, we explored the effect of diversity on the overall code quality of AI systems and found that diverse repositories tend to have superior code quality compared to non-diverse ones. Finally, our analysis of individual contributions revealed that although female contributors contribute to a smaller proportion of the total code, their contributions demonstrate consistently higher quality than those of their male counterparts. These findings highlight the need to remove barriers to female participation in AI development, as greater diversity can improve the overall quality of AI systems.
Related papers
- Bridging the Divide: Gender, Diversity, and Inclusion Gaps in Data Science and Artificial Intelligence Across Academia and Industry in the majority and minority worlds [0.5076419064097732]
This chapter examines the participation of women and minorities in AI and DS, focusing on their representation in both industry and academia. The dominance of men in AI and DS reinforces gender biases in machine learning systems, creating a feedback loop of inequality. This imbalance is a matter of social and economic justice and an ethical challenge, demanding value-driven diversity.
arXiv Detail & Related papers (2025-11-23T18:09:31Z)
- The Impact of Team Diversity in Agile Development Education [2.963223599781967]
We aim to assess the impact of team diversity, focusing mainly on gender and nationality, in the context of an agile software development project-based course. We analyzed 51 teams over three academic years, measuring three diversity indexes: gender, nationality, and their co-presence. Our findings, overall, show that promoting diversity in teams does not negatively impact their performance or the achievement of educational goals.
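The summary above does not say which diversity indexes the authors computed; a common choice for categorical attributes such as gender or nationality is the Blau (Gini-Simpson) index, 1 - Σ pᵢ², where pᵢ is the share of category i. The sketch below is an illustration of that standard measure, not the paper's actual implementation:

```python
from collections import Counter

def blau_index(members):
    """Blau diversity index for a list of category labels.

    Returns 0.0 for a fully homogeneous group and approaches 1.0 as
    membership spreads evenly across many categories.
    """
    counts = Counter(members)
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

# Example: a 6-person team with 2 members of one gender and 4 of another.
team = ["F", "F", "M", "M", "M", "M"]
print(round(blau_index(team), 3))  # 0.444  (= 1 - (1/3)^2 - (2/3)^2 = 4/9)
```

A "co-presence" index, as mentioned in the summary, could then combine the per-attribute indexes, but the summary does not specify how.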
arXiv Detail & Related papers (2025-09-10T08:32:50Z)
- Jointly Reinforcing Diversity and Quality in Language Model Generations [64.72289248044514]
Post-training of Large Language Models (LLMs) often prioritizes accuracy and helpfulness at the expense of diversity. We address this challenge with Diversity-Aware Reinforcement Learning (DARLING), a framework that jointly optimizes for response quality and semantic diversity.
arXiv Detail & Related papers (2025-09-02T17:38:47Z)
- Diversity and Inclusion in AI: Insights from a Survey of AI/ML Practitioners [4.761639988815896]
Growing awareness of social biases and inequalities embedded in Artificial Intelligence (AI) systems has brought increased attention to the integration of Diversity and Inclusion (D&I) principles throughout the AI lifecycle. Despite the rise of ethical AI guidelines, there is limited empirical evidence on how D&I is applied in real-world settings. This study explores how AI and Machine Learning (ML) practitioners perceive and implement D&I principles and identifies organisational challenges that hinder their effective adoption.
arXiv Detail & Related papers (2025-05-24T05:40:23Z)
- Perceptions of Discriminatory Decisions of Artificial Intelligence: Unpacking the Role of Individual Characteristics [0.0]
Personal differences (digital self-efficacy, technical knowledge, belief in equality, political ideology) are associated with perceptions of AI outcomes.
Digital self-efficacy and technical knowledge are positively associated with attitudes toward AI.
Liberal ideology is associated with lower outcome trust, more negative emotion, and greater skepticism.
arXiv Detail & Related papers (2024-10-17T06:18:26Z)
- Why AI Is WEIRD and Should Not Be This Way: Towards AI For Everyone, With Everyone, By Everyone [47.19142377073831]
This paper presents a vision for creating AI systems that are inclusive at every stage of development.
We address key limitations in the current AI pipeline and its WEIRD representation.
arXiv Detail & Related papers (2024-10-09T10:44:26Z)
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases. GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z)
- "My Kind of Woman": Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law [0.0]
This study delves into gender classification systems, shedding light on the interaction between social stereotypes and algorithmic determinations.
By incorporating cognitive psychology and feminist legal theory, we examine how data used for AI training can foster gender diversity and fairness.
arXiv Detail & Related papers (2024-06-27T20:03:27Z)
- Quality-Diversity through AI Feedback [10.423093353553217]
Quality-diversity (QD) search algorithms aim at continually improving and diversifying a population of candidates.
Recent developments in language models (LMs) have enabled guiding search through AI feedback.
Quality-Diversity through AI Feedback (QDAIF) is a step towards AI systems that can independently search, diversify, evaluate, and improve.
arXiv Detail & Related papers (2023-10-19T12:13:58Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
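The marker-enumeration step described above can be sketched as a simple prompt grid: prompts that differ only in identity markers, so that variation in the generated images can be attributed to those markers. The template, marker lists, and any downstream image-generation call below are illustrative placeholders, not the paper's actual setup:

```python
from itertools import product

# Enumerate identity markers into an otherwise fixed prompt template.
genders = ["woman", "man", "non-binary person"]            # illustrative
ethnicities = ["Black", "East Asian", "Hispanic", "White"]  # illustrative
template = "a portrait photo of a {ethnicity} {gender} working as a doctor"

prompts = [template.format(ethnicity=e, gender=g)
           for e, g in product(ethnicities, genders)]

for p in prompts[:2]:
    print(p)
print(len(prompts))  # 4 ethnicities x 3 genders = 12 prompts
```

Each prompt would then be sent to the TTI system under study, and the resulting image sets compared across cells of the grid.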
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
- How diverse is the ACII community? Analysing gender, geographical and business diversity of Affective Computing research [0.0]
ACII is the premier international forum for presenting the latest research on affective computing.
We measure diversity in terms of gender, geographic location and academia vs research centres vs industry, and consider three different actors: authors, keynote speakers and organizers.
Results raise awareness on the limited diversity in the field, in all studied facets, and compared to other AI conferences.
arXiv Detail & Related papers (2021-09-12T18:30:36Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.