The Resume Paradox: Greater Language Differences, Smaller Pay Gaps
- URL: http://arxiv.org/abs/2307.08580v1
- Date: Mon, 17 Jul 2023 15:49:35 GMT
- Title: The Resume Paradox: Greater Language Differences, Smaller Pay Gaps
- Authors: Joshua R. Minot, Marc Maier, Bradford Demarest, Nicholas Cheney,
Christopher M. Danforth, Peter Sheridan Dodds, and Morgan R. Frank
- Abstract summary: We analyze the language in millions of US workers' resumes to investigate how differences in workers' self-representation by gender compare to differences in earnings.
Across US occupations, language differences between male and female resumes correspond to 11% of the variation in the gender pay gap.
A doubling of the language difference between female and male resumes results in an annual wage increase of $2,797 for the average female worker.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decade, the gender pay gap has remained steady, with
women earning 84 cents for every dollar earned by men on average. Many studies
explain this gap through demand-side bias in the labor market represented
through employers' job postings. However, few studies analyze potential bias
from the worker supply-side. Here, we analyze the language in millions of US
workers' resumes to investigate how differences in workers' self-representation
by gender compare to differences in earnings. Across US occupations, language
differences between male and female resumes correspond to 11% of the variation
in the gender pay gap. At first glance, this suggests that women whose resumes
are semantically similar to men's may see greater wage parity. However, surprisingly,
occupations with greater language differences between male and female resumes
have lower gender pay gaps. A doubling of the language difference between
female and male resumes results in an annual wage increase of $2,797 for the
average female worker. This result holds when controlling for the gender bias
of resume text, and we find that per-word bias poorly explains the variance in
the wage gap. The results demonstrate that textual data and self-representation are
valuable factors for improving worker representations and understanding
employment inequities.
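The headline figures imply a simple statistical reading: the "$2,797 per doubling" estimate is the slope of a regression of the occupation-level pay gap on the base-2 logarithm of the male-female language difference. Below is a minimal sketch of that kind of analysis using invented placeholder data; the paper's actual resume embeddings, wage sources, and controls are not reproduced here.
```python
# Minimal sketch (invented data): regress the occupation-level gender
# pay gap on log2 of the male-female resume language difference, so
# the slope reads as "dollars per doubling of the language difference".
import numpy as np

rng = np.random.default_rng(0)

# Placeholder inputs: one mean resume embedding per gender for each
# occupation, and a per-occupation pay gap in dollars (all synthetic).
n_occupations, dim = 50, 32
male_emb = rng.normal(size=(n_occupations, dim))
female_emb = rng.normal(size=(n_occupations, dim))
pay_gap = rng.normal(loc=8000.0, scale=3000.0, size=n_occupations)

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity between two vectors."""
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Language difference per occupation: distance between the average
# male and female resume embeddings.
lang_diff = np.array([cosine_distance(m, f)
                      for m, f in zip(male_emb, female_emb)])

# Slope on log2(lang_diff) = change in pay gap per doubling.
slope, intercept = np.polyfit(np.log2(lang_diff), pay_gap, 1)
r = np.corrcoef(np.log2(lang_diff), pay_gap)[0, 1]
print(f"per-doubling effect: ${slope:,.0f}  (R^2 = {r**2:.2f})")
```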
Related papers
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT, a benchmark for Gender-Inclusive Machine Translation with Ambiguous attitude words.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Exploring the Impact of Training Data Distribution and Subword Tokenization on Gender Bias in Machine Translation [19.719314005149883]
We study the effect of tokenization on gender bias in machine translation.
We observe that female and non-stereotypical gender inflections of profession names tend to be split into multiple subword tokens.
We show that analyzing subword splits provides good estimates of gender-form imbalance in the training data.
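As a rough illustration of the measurement above, one can count the subword pieces a pretrained tokenizer assigns to masculine versus feminine profession nouns. The model choice and word pairs below are assumptions for the sketch, not the paper's experimental setup.
```python
# Illustrative only: compare subword counts for masculine vs. feminine
# profession nouns. More pieces for the feminine form hints that it is
# rarer in the tokenizer's training data.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-german-cased")

pairs = [("Arzt", "Ärztin"),            # doctor
         ("Lehrer", "Lehrerin"),        # teacher
         ("Ingenieur", "Ingenieurin")]  # engineer

for masc, fem in pairs:
    n_m, n_f = len(tok.tokenize(masc)), len(tok.tokenize(fem))
    print(f"{masc}: {n_m} piece(s) | {fem}: {n_f} piece(s)")
```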
arXiv Detail & Related papers (2023-09-21T21:21:55Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Professional Presentation and Projected Power: A Case Study of Implicit Gender Information in English CVs [8.947168670095326]
This paper investigates the framing of skills and background in CVs of self-identified men and women.
We introduce a data set of 1.8K authentic, English-language CVs from the US, covering 16 occupations.
arXiv Detail & Related papers (2022-11-17T23:26:52Z)
- Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation [10.227479910430866]
This work aims to help mitigate the existing gender wage gap by supplying unbiased job recommendations based on resumes from job seekers.
We employ a generative adversarial network to remove gender bias from word2vec representations of 12M job vacancy texts and 900k resumes.
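A minimal sketch of the adversarial idea, with random tensors standing in for the word2vec resume representations (the paper's architecture and loss weights are not given here): an encoder keeps vectors close to their inputs while being trained to fool an adversary that tries to recover gender from them.
```python
# Sketch of adversarial debiasing (architecture and loss weights are
# invented): the encoder preserves content while erasing gender signal.
import torch
import torch.nn as nn

dim = 300  # word2vec-sized vectors
encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
adversary = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Placeholder batch: random "resume" vectors with random gender labels.
x = torch.randn(128, dim)
y = torch.randint(0, 2, (128, 1)).float()

for step in range(200):
    # 1) Adversary learns to predict gender from the encoded vectors.
    opt_adv.zero_grad()
    adv_loss = bce(adversary(encoder(x).detach()), y)
    adv_loss.backward()
    opt_adv.step()

    # 2) Encoder stays near its input while maximizing the adversary's
    #    loss, i.e. removing the recoverable gender signal.
    opt_enc.zero_grad()
    z = encoder(x)
    loss = ((z - x) ** 2).mean() - 0.1 * bce(adversary(z), y)
    loss.backward()
    opt_enc.step()
```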
arXiv Detail & Related papers (2022-09-20T10:11:40Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
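A small probe in the same spirit, assuming the Hugging Face transformers pipeline and the public GPT-2 checkpoint (the prompt wording is invented, and the pronoun counting is deliberately crude):
```python
# Probe sketch (prompts invented): count gendered pronouns in GPT-2
# continuations of junior- vs. senior-marked prompts.
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")
for prompt in ("The senior engineer said that",
               "The junior engineer said that"):
    outs = gen(prompt, max_new_tokens=20, num_return_sequences=5,
               do_sample=True)
    text = " ".join(o["generated_text"].lower() for o in outs)
    # Crude whitespace-delimited count; a real study would parse.
    print(prompt, "->", text.count(" he "), "he,", text.count(" she "), "she")
```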
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring [0.0]
We train a series of models to classify the gender of the applicant.
We investigate whether it is possible to obfuscate gender from resumes.
We find that there is a significant amount of gendered information in resumes even after obfuscation.
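One way to make the classification claim concrete is a bag-of-words probe: if a simple classifier predicts gender well above chance from resume text, gendered signal remains. The toy strings and labels below are invented; the paper's models and data are not reproduced here.
```python
# Toy probe (invented strings and labels): accuracy well above chance
# would indicate residual gendered signal in the text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

resumes = ["managed a team of engineers", "organized community outreach",
           "led software development", "coordinated volunteer events"] * 25
labels = [0, 1, 0, 1] * 25  # 0/1 stand in for self-reported gender

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
print("mean CV accuracy:", cross_val_score(clf, resumes, labels, cv=5).mean())
```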
arXiv Detail & Related papers (2021-12-16T14:26:36Z)
- Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models [104.41668491794974]
We quantify the usage of adjectives and verbs generated by language models surrounding the names of politicians as a function of their gender.
We find that while some words, such as "dead" and "designated", are associated with both male and female politicians, a few specific words, such as "beautiful" and "divorced", are predominantly associated with female politicians.
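A hedged sketch of how such word associations might be tallied, assuming spaCy with the en_core_web_sm model installed; the sentences are invented stand-ins for model generations.
```python
# Sketch (invented sentences): extract adjectives from text mentioning
# politicians, using spaCy's part-of-speech tags.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed
texts = ["Angela Merkel is a pragmatic and careful leader.",
         "Barack Obama is an eloquent speaker."]
for doc in map(nlp, texts):
    adjectives = [tok.text for tok in doc if tok.pos_ == "ADJ"]
    print(doc.text, "->", adjectives)
```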
arXiv Detail & Related papers (2021-04-15T15:03:26Z)
- How to Measure Gender Bias in Machine Translation: Optimal Translators, Multiple Reference Points [0.0]
We translate sentences containing names of occupations from Hungarian, a language with gender-neutral pronouns, into English.
Our aim was to present a fair measure of bias by comparing the translations to those of an optimal non-biased translator.
We found bias against both genders, but bias against women is much more frequent.
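A crude baseline version of this measurement, with invented translations standing in for real MT output: record which pronoun the English translation chose for each gender-neutral Hungarian source and tally the totals.
```python
# Crude baseline (translations invented): tally the pronoun an MT
# system picked when translating gender-neutral Hungarian sources.
from collections import Counter

translations = {  # occupation -> hypothetical English MT output
    "nurse": "She is a nurse.",
    "engineer": "He is an engineer.",
    "teacher": "She is a teacher.",
}
counts = Counter()
for occupation, sentence in translations.items():
    pronoun = sentence.split()[0].lower()
    if pronoun in ("he", "she"):
        counts[pronoun] += 1
print(dict(counts))  # e.g. {'she': 2, 'he': 1}
```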
arXiv Detail & Related papers (2020-11-12T15:39:22Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)