Implicit Gender Bias in Computer Science -- A Qualitative Study
- URL: http://arxiv.org/abs/2107.01624v1
- Date: Sun, 4 Jul 2021 13:30:26 GMT
- Title: Implicit Gender Bias in Computer Science -- A Qualitative Study
- Authors: Aurélie Breidenbach, Caroline Mahlow, and Andreas Schreiber
- Abstract summary: Gender diversity in the tech sector is not yet sufficient to create a balanced ratio of men and women.
For many women, access to computer science is hampered by socialization-related, social, cultural and structural obstacles.
The lack of contact in areas of computer science makes it difficult to develop or expand potential interests.
- Score: 3.158346511479111
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gender diversity in the tech sector is not yet sufficient to create a
balanced ratio of men and women. For many women, access to computer science is
hampered by socialization-related, social, cultural, and structural obstacles.
The so-called implicit gender bias has a great influence in this respect. A
lack of contact with areas of computer science makes it difficult to develop or
expand potential interests. Female role models, as well as greater transparency
about the job profile, should help women develop a possible interest in the
profession. In addition, gender diversity can be promoted and fostered through
adapted measures by leaders.
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Thinking beyond Bias: Analyzing Multifaceted Impacts and Implications of AI on Gendered Labour [1.5839621757142595]
This paper emphasizes the need to explore AI's broader impacts on gendered labour.
We draw attention to how the AI industry as an integral component of the larger economic structure is transforming the nature of work.
arXiv Detail & Related papers (2024-06-23T20:09:53Z)
- Perceptions of Entrepreneurship Among Graduate Students: Challenges, Opportunities, and Cultural Biases [0.0]
The paper focuses on the perceptions of entrepreneurship of graduate students enrolled in a digital-oriented entrepreneurship course.
The main issues raised by the students were internal traits and external obstacles, such as lack of resources and support.
Gender discrimination and cultural biases persist, limiting opportunities and equality for women.
arXiv Detail & Related papers (2024-05-10T08:35:18Z)
- Breaking Barriers: Investigating the Sense of Belonging Among Women and Non-Binary Students in Software Engineering [1.9075820340282934]
Women are far less likely to pursue a career in the software engineering industry.
Reasons for women and other underrepresented minorities to leave the industry are a lack of opportunities for growth and advancement.
This research explores how software engineering education may cultivate or uphold an industry unfavourable to women and non-binary individuals.
arXiv Detail & Related papers (2024-05-06T20:07:45Z)
- Social Skill Training with Large Language Models [65.40795606463101]
People rely on social skills like conflict resolution to communicate effectively and to thrive in both work and personal life.
This perspective paper identifies social skill barriers to entering specialized fields.
We present a solution that leverages large language models for social skill training via a generic framework.
arXiv Detail & Related papers (2024-04-05T16:29:58Z)
- The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages [51.2321117760104]
This paper describes the Gender-GAP Pipeline, an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages.
The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text.
We showcase it to report gender representation in WMT training data and development data for the News task, confirming that current data is skewed towards masculine representation.
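The lexicon-based approach described above can be illustrated with a minimal sketch. The lexicon below is a tiny hypothetical English sample for demonstration only, not the paper's actual multilingual lexicon of gendered person-nouns:

```python
# Illustrative sketch of lexicon-based gender representation counting,
# in the spirit of the Gender-GAP Pipeline. LEXICON is a small made-up
# sample, not the pipeline's real resource.
from collections import Counter

LEXICON = {
    "masculine": {"he", "him", "his", "man", "men", "father", "son"},
    "feminine": {"she", "her", "hers", "woman", "women", "mother", "daughter"},
}

def gender_counts(text: str) -> Counter:
    """Count occurrences of gendered words from the lexicon in a text."""
    tokens = (t.strip(".,;:!?") for t in text.lower().split())
    counts = Counter({gender: 0 for gender in LEXICON})
    for token in tokens:
        for gender, words in LEXICON.items():
            if token in words:
                counts[gender] += 1
    return counts

sample = "The man told his daughter that she could become an engineer."
print(gender_counts(sample))
```

Aggregating these counts over a corpus such as the WMT training data yields the skew statistics the paper reports.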
arXiv Detail & Related papers (2023-08-31T17:20:50Z)
- Monitoring Gender Gaps via LinkedIn Advertising Estimates: the case study of Italy [3.5493798890908104]
We evaluate the potential of LinkedIn estimates to sustainably monitor the evolution of gender gaps.
Our findings show that the LinkedIn estimates accurately capture the gender disparities in Italy regarding sociodemographic attributes.
At the same time, we assess data biases such as the digitalisation gap, which impacts the representativity of the workforce in an imbalanced manner.
arXiv Detail & Related papers (2023-03-10T11:32:45Z)
- Dynamics of Gender Bias in Computing [0.0]
This article presents a new dataset focusing on formative years of computing as a profession (1950-1980) when U.S. government workforce statistics are thin or non-existent.
It revises commonly held conjectures that gender bias in computing emerged during professionalization of computer science in the 1960s or 1970s.
arXiv Detail & Related papers (2022-11-07T23:29:56Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with 1% word error rate with no human-labeled data.
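The target transformation can be sketched with a simple rule-based substitution. This is only an illustration of the rewriting task, not the paper's trained model, and it deliberately omits hard cases such as the ambiguous "her" and verb agreement:

```python
# Minimal rule-based illustration of rewriting gendered pronouns to the
# singular "they". Not the paper's learned model; verb agreement
# ("she is" -> "they are") and the ambiguous "her" are not handled.
import re

PRONOUN_MAP = {
    "he": "they", "she": "they",
    "him": "them",
    "his": "their", "hers": "theirs",
    "himself": "themself", "herself": "themself",
}

def to_gender_neutral(sentence: str) -> str:
    def repl(match: re.Match) -> str:
        word = match.group(0)
        neutral = PRONOUN_MAP[word.lower()]
        # Preserve sentence-initial capitalization.
        return neutral.capitalize() if word[0].isupper() else neutral

    pattern = r"\b(" + "|".join(PRONOUN_MAP) + r")\b"
    return re.sub(pattern, repl, sentence, flags=re.IGNORECASE)

print(to_gender_neutral("She said he would bring his laptop."))
# -> They said they would bring their laptop.
```

A learned model, as in the paper, is needed precisely because such rule-based substitution cannot resolve ambiguity or fix subject-verb agreement.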
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.