Data-Driven Analysis of Gender Fairness in the Software Engineering
Academic Landscape
- URL: http://arxiv.org/abs/2309.11239v1
- Date: Wed, 20 Sep 2023 12:04:56 GMT
- Title: Data-Driven Analysis of Gender Fairness in the Software Engineering
Academic Landscape
- Authors: Giordano d'Aloisio, Andrea D'Angelo, Francesca Marzi, Diana Di Marco,
Giovanni Stilo, and Antinisca Di Marco
- Abstract summary: We study the problem of gender bias in academic promotions in the informatics (INF) and software engineering (SE) Italian communities.
We first conduct a literature review to assess how the problem of gender bias in academia has been addressed so far.
Next, we describe a process to collect and preprocess the INF and SE data needed to analyse gender bias in Italian academic promotions.
From the conducted analysis, we observe that the SE community presents a higher bias in promotions to Associate Professors and a smaller bias in promotions to Full Professors than the overall INF community.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gender bias in education gained considerable relevance in the literature over
the years. However, while the problem of gender bias in education has been
widely addressed from a student perspective, it is still not fully analysed
from an academic point of view. In this work, we study the problem of gender
bias in academic promotions (i.e., from Researcher to Associate Professor and
from Associate to Full Professor) in the informatics (INF) and software
engineering (SE) Italian communities. In particular, we first conduct a
literature review to assess how the problem of gender bias in academia has been
addressed so far. Next, we describe a process to collect and preprocess the INF
and SE data needed to analyse gender bias in Italian academic promotions.
Subsequently, we apply a formal bias metric to these data to assess the amount
of bias and look at its variation over time. From the conducted analysis, we
observe how the SE community presents a higher bias in promotions to Associate
Professors and a smaller bias in promotions to Full Professors compared to the
overall INF community.
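The abstract mentions applying a formal bias metric to promotion data without naming it here. As an illustration only, the sketch below computes a disparate-impact-style ratio of promotion rates between genders; the metric choice, data layout, and numbers are assumptions for exposition, not the authors' actual method or data.

```python
# Hypothetical sketch of a promotion-bias metric (disparate-impact ratio).
# The metric, function names, and toy numbers are illustrative assumptions,
# not the formal metric used in the paper.

def promotion_rate(promoted, eligible):
    """Fraction of eligible academics promoted in a given period."""
    return promoted / eligible if eligible else 0.0

def disparate_impact(promoted_f, eligible_f, promoted_m, eligible_m):
    """Ratio of female to male promotion rates.

    1.0 means parity; values below 1.0 indicate that women are
    promoted at a lower rate than men.
    """
    rate_f = promotion_rate(promoted_f, eligible_f)
    rate_m = promotion_rate(promoted_m, eligible_m)
    return rate_f / rate_m if rate_m else float("inf")

# Invented toy numbers: 12 of 60 eligible women and 30 of 100
# eligible men promoted to Associate Professor in one period.
di = disparate_impact(12, 60, 30, 100)
print(round(di, 2))  # 0.2 / 0.3 -> 0.67
```

Tracking such a ratio separately for each promotion step (Researcher to Associate, Associate to Full) and per year would yield the kind of over-time comparison between the SE and INF communities that the abstract describes.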
Related papers
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs)
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the benchmark AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Towards Region-aware Bias Evaluation Metrics [26.91545185271231]
We identify topical differences in gender bias across different regions and propose a region-aware bottom-up approach for bias assessment.
Our proposed approach uses gender-aligned topics for a given region and identifies gender bias dimensions in the form of topic pairs.
Several of our proposed bias topic pairs agree with human perception of gender biases in these regions more closely than existing ones do.
arXiv Detail & Related papers (2024-06-23T16:26:27Z)
- Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation [47.770531682802314]
Even simple prompts could cause T2I models to exhibit conspicuous social bias in generated images.
We present the first extensive survey on bias in T2I generative models.
We discuss how these works define, evaluate, and mitigate different aspects of bias.
arXiv Detail & Related papers (2024-04-01T10:19:05Z)
- Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications [0.0]
The study examines existing research on gender bias in AI language models and identifies gaps in the current knowledge.
The findings shed light on gendered word associations, language usage, and biased narratives present in the outputs of Large Language Models.
The paper presents strategies for reducing gender bias in LLMs, including algorithmic approaches and data augmentation techniques.
arXiv Detail & Related papers (2023-07-18T11:38:45Z)
- Gender Bias in Big Data Analysis [0.0]
This paper measures the gender bias that arises when gender prediction software tools are used in historical big data research.
Gender bias is measured by contrasting personally identified computer science authors in the well-regarded DBLP dataset.
arXiv Detail & Related papers (2022-11-17T20:13:04Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- Assessing Gender Bias in the Information Systems Field: An Analysis of the Impact on Citations [0.0]
This paper outlines a study to estimate the impact of scholarly citations that female IS academics accumulate vis-a-vis their male colleagues.
By doing so we propose to contribute knowledge on a core dimension of gender bias in academia, which is, so far, almost completely unexplored in the IS field.
arXiv Detail & Related papers (2021-08-22T18:18:52Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.