Assessing Gender Bias in the Information Systems Field: An Analysis of
the Impact on Citations
- URL: http://arxiv.org/abs/2108.12255v1
- Date: Sun, 22 Aug 2021 18:18:52 GMT
- Title: Assessing Gender Bias in the Information Systems Field: An Analysis of
the Impact on Citations
- Authors: Silvia Masiero and Aleksi Aaltonen
- Abstract summary: This paper outlines a study to estimate the gap in scholarly citations that female IS academics accumulate vis-à-vis their male colleagues.
By doing so, we propose to contribute knowledge on a core dimension of gender bias in academia that is, so far, almost completely unexplored in the IS field.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Gender bias, a systemic and unfair difference in how men and women are
treated in a given domain, is widely studied across different academic fields.
Yet, there are barely any studies of the phenomenon in the academic field of
information systems (IS), which is especially surprising in light of the
proliferation of such studies in the Science, Technology, Engineering, and
Mathematics (STEM) disciplines. To assess potential gender bias in the IS field,
this paper outlines a study to estimate the gap in scholarly citations that
female IS academics accumulate vis-à-vis their male colleagues. Drawing on a
scientometric study of the 7,260 papers published in the most prestigious IS
journals (known as the AIS Basket of Eight), our analysis aims to unveil
potential bias in the accumulation of citations between genders in the field.
We use panel regression to estimate the gendered accumulation of citations in
the field (a model of this kind is sketched below). By doing so, we propose to contribute knowledge on a core dimension of
gender bias in academia, which is, so far, almost completely unexplored in the
IS field.
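The abstract names panel regression as the estimation strategy. Below is a minimal sketch of that kind of model in Python with statsmodels, assuming a synthetic paper-year panel; the column names (female_author, journal, age), the log-citations specification, and the clustering choice are illustrative assumptions, not the authors' exact model or data.

```python
# Minimal sketch of a panel regression of citation accumulation on author
# gender. All column names, the synthetic data, and the exact specification
# are illustrative assumptions, not the authors' actual model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_papers, n_years = 200, 5

# One row per paper: journal (stand-in for the AIS Basket of Eight),
# first-author gender, and publication year.
papers = pd.DataFrame({
    "paper_id": np.arange(n_papers),
    "journal": rng.integers(0, 8, n_papers),
    "female_author": rng.integers(0, 2, n_papers),
    "pub_year": rng.integers(2010, 2018, n_papers),
})

# Expand into a paper-year panel: each paper is observed for n_years after
# publication, with a yearly citation count.
panel = papers.loc[papers.index.repeat(n_years)].reset_index(drop=True)
panel["age"] = np.tile(np.arange(n_years), n_papers)
panel["year"] = panel["pub_year"] + panel["age"]
panel["citations"] = rng.poisson(2.0 + 0.5 * panel["age"])

# OLS on log(1 + citations) with journal and year fixed effects; standard
# errors are clustered by paper to respect the panel structure.
model = smf.ols(
    "np.log1p(citations) ~ female_author + age + C(journal) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["paper_id"]})

# The coefficient on female_author is the estimated gender gap in log citations.
print(model.params["female_author"], model.bse["female_author"])
```

Clustering standard errors by paper is one common way to account for repeated observations of the same paper across years; the authors' actual specification may differ.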
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation
Even simple prompts could cause T2I models to exhibit conspicuous social bias in generated images.
We present the first extensive survey on bias in T2I generative models.
We discuss how these works define, evaluate, and mitigate different aspects of bias.
arXiv Detail & Related papers (2024-04-01T10:19:05Z)
- Data-Driven Analysis of Gender Fairness in the Software Engineering Academic Landscape
We study the problem of gender bias in academic promotions in the Italian informatics (INF) and software engineering (SE) communities.
We first conduct a literature review to assess how the problem of gender bias in academia has been addressed so far.
Next, we describe a process to collect and preprocess the INF and SE data needed to analyse gender bias in Italian academic promotions.
From the analysis, we observe that the SE community exhibits a higher bias in promotions to Associate Professor and a smaller bias in promotions to Full Professor than the overall INF community.
arXiv Detail & Related papers (2023-09-20T12:04:56Z)
- Gender Inequalities: Women Researchers Require More Knowledge in Specific and Experimental Topics
This study analyzes the relationship between regional and gender identities, topics, and knowledge status.
We find that gender inequalities are entangled with both region-specific characteristics and globally common patterns.
arXiv Detail & Related papers (2023-09-05T05:36:06Z)
- Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications
The study examines existing research on gender bias in AI language models and identifies gaps in the current knowledge.
The findings shed light on gendered word associations, language usage, and biased narratives present in the outputs of Large Language Models.
The paper presents strategies for reducing gender bias in LLMs, including algorithmic approaches and data augmentation techniques.
arXiv Detail & Related papers (2023-07-18T11:38:45Z)
- Gender Bias in Big Data Analysis
This study measures the gender bias introduced when gender prediction software tools are used in historical big data research.
Bias is measured by contrasting the tools' predictions against personally identified computer science authors in the well-regarded DBLP dataset (see the sketch below).
arXiv Detail & Related papers (2022-11-17T20:13:04Z)
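A minimal sketch of the contrast the entry above describes: run a gender prediction step over author names, compare its output with self-identified labels, and report error rates per group. The tiny sample and the naive first-name lookup standing in for a real prediction tool are illustrative assumptions, not the paper's actual tool or the DBLP data.

```python
# Sketch: contrast predicted gender with self-identified labels and compare
# error rates between groups. Data and lookup are illustrative assumptions.
import pandas as pd

# Self-identified labels play the role of the personally identified
# DBLP authors used as ground truth.
authors = pd.DataFrame({
    "name": ["Alice Smith", "Bob Jones", "Chris Lee", "Dana Kim", "Eve Chan"],
    "self_identified": ["F", "M", "M", "F", "F"],
})

# Hypothetical stand-in for a gender prediction tool: a first-name lookup
# with an "unknown" fallback.
LOOKUP = {"Alice": "F", "Bob": "M", "Eve": "F"}
authors["predicted"] = authors["name"].str.split().str[0].map(LOOKUP).fillna("U")

# Share of each self-identified group that is misclassified or left unknown;
# a gap between the groups is the kind of bias the study quantifies.
errors = authors["predicted"] != authors["self_identified"]
print(errors.groupby(authors["self_identified"]).mean())
```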
- Dynamics of Gender Bias in Computing
This article presents a new dataset focusing on the formative years of computing as a profession (1950-1980), when U.S. government workforce statistics are thin or non-existent.
It revises commonly held conjectures that gender bias in computing emerged during the professionalization of computer science in the 1960s or 1970s.
arXiv Detail & Related papers (2022-11-07T23:29:56Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- Are Commercial Face Detection Models as Biased as Academic Models?
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first specifically tailored measure for Information Retrieval, capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Multi-Dimensional Gender Bias Classification
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.