Gender Bias in Computing
- URL: http://arxiv.org/abs/2210.16449v1
- Date: Sat, 29 Oct 2022 00:10:25 GMT
- Title: Gender Bias in Computing
- Authors: Thomas J. Misa
- Abstract summary: The paper offers new quantitative data on the computing workforce prior to the availability of US Census data in the 1970s.
A novel method of gender analysis is developed to estimate women's and men's participation in computing beginning in the 1950s.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper examines the historical dimension of gender bias in the US
computing workforce. It offers new quantitative data on the computing workforce
prior to the availability of US Census data in the 1970s. Computer user groups
(including SHARE, Inc., and the Mark IV software user group) are taken as a
cross-section of the computing workforce. A novel method of gender analysis is
developed to estimate women's and men's participation in computing beginning in
the 1950s. The data presented here are consistent with well-known NSF
statistics that show computer science undergraduate programs enrolling
increasing numbers of women students during 1965-1985. These findings challenge
the 'making programming masculine' thesis, and serve to correct the
unrealistically high figures often cited for women's participation in early
computer programming. Gender bias in computing today is traced not to 1960s
professionalization but to cultural changes in the 1980s and beyond.
Related papers
- Dynamics of Gender Bias within Computer Science [0.0]
ACM SIGs expanded during 1970-2000; each experienced increasing women's authorship.
Several SIGs had fewer than 10% women authors while SIGUCCS exceeded 40%.
Three SIGs experienced accelerating growth in women's authorship; most, including a composite ACM, had decelerating growth.
arXiv Detail & Related papers (2024-07-11T00:14:21Z)
- The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages [51.2321117760104]
This paper describes the Gender-GAP Pipeline, an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages.
The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text.
We showcase it to report gender representation in WMT training data and development data for the News task, confirming that current data is skewed towards masculine representation.
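The pipeline's lexicon spans 55 languages; below is a minimal single-language sketch of the lexicon-counting idea in Python. The mini-lexicon and function are illustrative stand-ins, not the pipeline's actual resources or API.

```python
from collections import Counter
import re

# Illustrative mini-lexicon (not the pipeline's real multilingual
# lexicon): English gendered person-nouns mapped to a gender class.
GENDERED_NOUNS = {
    "he": "masculine", "him": "masculine", "man": "masculine",
    "men": "masculine", "father": "masculine",
    "she": "feminine", "her": "feminine", "woman": "feminine",
    "women": "feminine", "mother": "feminine",
}

def gender_representation(text: str) -> dict[str, float]:
    """Share of lexicon matches per gender class in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(GENDERED_NOUNS[t] for t in tokens if t in GENDERED_NOUNS)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()} if total else {}

print(gender_representation("The man told his mother that he and the men had left."))
# -> {'masculine': 0.75, 'feminine': 0.25}
```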
arXiv Detail & Related papers (2023-08-31T17:20:50Z)
- Gender Bias in Big Data Analysis [0.0]
This paper measures the gender bias introduced when gender prediction software tools are used in historical big data research.
Bias is measured by contrasting the tools' predictions with personally identified computer science authors in the well-regarded DBLP dataset.
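A minimal sketch of that kind of audit, assuming a list of personally identified authors and a stand-in name-based predictor (all names, labels, and the prediction rule below are invented, not the paper's data or tooling):

```python
# Hypothetical audit: compare a name-based gender predictor against
# authors whose gender is personally identified.
known_authors = [
    ("Grace", "F"), ("Alan", "M"), ("Jean", "F"), ("Leslie", "M"),
]

def predict_gender(name: str) -> str:
    """Stand-in for a gender-prediction software tool."""
    return "F" if name in {"Grace", "Jean", "Leslie"} else "M"

errors = {"F": 0, "M": 0}
totals = {"F": 0, "M": 0}
for name, identified in known_authors:
    totals[identified] += 1
    if predict_gender(name) != identified:
        errors[identified] += 1

for g in ("F", "M"):
    print(f"{g}: {errors[g]}/{totals[g]} misclassified "
          f"({errors[g] / totals[g]:.0%})")
# F: 0/2 misclassified (0%)
# M: 1/2 misclassified (50%)  <- "Leslie" is predicted F
```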
arXiv Detail & Related papers (2022-11-17T20:13:04Z)
- Dynamics of Gender Bias in Computing [0.0]
This article presents a new dataset covering the formative years of computing as a profession (1950-1980), when U.S. government workforce statistics are thin or non-existent.
It revises the commonly held conjecture that gender bias in computing emerged during the professionalization of computer science in the 1960s or 1970s.
arXiv Detail & Related papers (2022-11-07T23:29:56Z)
- Temporal Analysis and Gender Bias in Computing [0.0]
Many given names change ascribed gender over decades: the "Leslie problem."
This article identifies 300 given names with measurable "gender shifts" across 1925-1975.
It demonstrates, quantitatively, a net "female shift" that likely results in the overcounting of women (and undercounting of men) in earlier decades.
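To make the overcounting mechanism concrete, here is a small sketch with invented rates: assigning gender to an early-period name using its later-period feminine share counts historically male bearers as women.

```python
# Invented rates for a "Leslie"-like name whose ascribed gender
# shifted female between 1925 and 1975.
p_female_1930 = 0.10   # feminine share of the name circa 1930 (assumed)
p_female_1970 = 0.70   # feminine share circa 1970 (assumed)
bearers_in_1930_source = 100  # occurrences of the name in a 1930s roster

# Year-appropriate count vs. anachronistic count using the later rate:
women_period_rate = bearers_in_1930_source * p_female_1930  # 10 women
women_later_rate = bearers_in_1930_source * p_female_1970   # 70 "women"

print(f"women with 1930 rate: {women_period_rate:.0f}")  # 10
print(f"women with 1970 rate: {women_later_rate:.0f}")   # 70
# Applying the later rate overcounts women (and undercounts men) by 60.
```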
arXiv Detail & Related papers (2022-09-29T00:29:43Z)
- Gender Representation in Brazilian Computer Science Conferences [0.6961253535504979]
This study presents an automated bibliometric analysis of 6569 research papers published in thirteen Brazilian Computer Science Society (SBC) conferences from 1999 to 2021.
We applied a systematic assignment of gender to 23,573 listed paper authorships, finding that the gender gap for women is significant.
arXiv Detail & Related papers (2022-08-23T15:10:10Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting [88.83117372793737]
Debiasing a pre-trained model by further training can cause it to forget information in the original training data, damaging its downstream performance by a large margin.
We propose GEnder Equality Prompt (GEEP) to improve gender fairness of pre-trained models with less forgetting.
arXiv Detail & Related papers (2021-10-11T15:52:16Z)
- Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first measure specifically tailored to Information Retrieval that is capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
- Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.