Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation
- URL: http://arxiv.org/abs/2209.09592v1
- Date: Tue, 20 Sep 2022 10:11:40 GMT
- Title: Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation
- Authors: Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, Gido H. Schoenmacker
- Abstract summary: This work aims to help mitigate the already existing gender wage gap by supplying unbiased job recommendations based on resumes from job seekers.
We employ a generative adversarial network to remove gender bias from word2vec representations of 12M job vacancy texts and 900k resumes.
- Score: 10.227479910430866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of this work is to help mitigate the already existing gender wage
gap by supplying unbiased job recommendations based on resumes from job
seekers. We employ a generative adversarial network to remove gender bias from
word2vec representations of 12M job vacancy texts and 900k resumes. Our results
show that representations created from recruitment texts contain algorithmic
bias and that this bias results in real-world consequences for recommendation
systems. Without controlling for bias, women are recommended jobs with
significantly lower salary in our data. With adversarially fair
representations, this wage gap disappears, meaning that our debiased job
recommendations reduce wage discrimination. We conclude that adversarial
debiasing of word representations can increase real-world fairness of systems
and thus may be part of the solution for creating fairness-aware recommendation
systems.
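The abstract describes training a generative adversarial network to strip gender information from word2vec representations of vacancy texts and resumes. The exact architecture is not given here; as a rough illustration of the underlying goal (making the protected attribute unrecoverable by an adversary), the following sketch uses a simpler adversarial-projection loop on synthetic data — all names and values are invented, and this is not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for word2vec resume representations: random features
# plus a strong "gender direction" that leaks the protected attribute.
n, d = 400, 16
gender = rng.integers(0, 2, size=n).astype(float)
g_dir = rng.normal(size=d)
g_dir /= np.linalg.norm(g_dir)
X = rng.normal(size=(n, d)) + np.outer(gender - 0.5, g_dir) * 3.0

def train_adversary(Z, y, steps=500, lr=0.1):
    """Logistic-regression adversary that tries to predict gender from Z."""
    w, b = np.zeros(Z.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
        w -= lr * Z.T @ (p - y) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

def adversary_accuracy(Z, y):
    w, b = train_adversary(Z, y)
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    return float(np.mean((p > 0.5) == (y > 0.5)))

# Debias loop: repeatedly fit the adversary, then project out the
# direction it exploits, until gender is no longer recoverable.
Z = X.copy()
for _ in range(3):
    w, _ = train_adversary(Z, gender)
    u = w / (np.linalg.norm(w) + 1e-12)
    Z = Z - np.outer(Z @ u, u)

before = adversary_accuracy(X, gender)  # near-perfect on biased vectors
after = adversary_accuracy(Z, gender)   # near chance after debiasing
```

The design choice mirrors the paper's fairness criterion: a representation is "adversarially fair" when a discriminator can no longer infer the protected attribute from it, so jobs recommended from the debiased vectors cannot systematically key on gender.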
Related papers
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models (LVLMs).
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated [70.23064111640132]
We compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets.
Experiments show that the effects of debiasing are consistently underestimated across all tasks.
arXiv Detail & Related papers (2023-09-16T20:25:34Z)
- The Resume Paradox: Greater Language Differences, Smaller Pay Gaps [0.0]
We analyze the language in millions of US workers' resumes to investigate how differences in workers' self-representation by gender compare to differences in earnings.
Across US occupations, language differences between male and female resumes correspond to 11% of the variation in the gender pay gap.
A doubling of the language difference between female and male resumes corresponds to an annual wage increase of $2,797 for the average female worker.
arXiv Detail & Related papers (2023-07-17T15:49:35Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Discrimination through Image Selection by Job Advertisers on Facebook [79.21648699199648]
We identify a new means of discrimination in job advertising and investigate its prevalence.
It combines both targeting and delivery -- through the disproportionate representation or exclusion of people of certain demographics in job ad images.
We use the Facebook Ad Library to demonstrate the prevalence of this practice.
arXiv Detail & Related papers (2023-06-13T03:43:58Z)
- Efficient Gender Debiasing of Pre-trained Indic Language Models [0.0]
The gender bias present in the data on which language models are pre-trained gets reflected in the systems that use these models.
In our paper, we measure gender bias associated with occupations in Hindi language models.
Our results show that the bias is reduced after applying our proposed mitigation techniques.
arXiv Detail & Related papers (2022-09-08T09:15:58Z)
- Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring [0.0]
We train a series of models to classify the gender of the applicant.
We investigate whether it is possible to obfuscate gender from resumes.
We find that there is a significant amount of gendered information in resumes even after obfuscation.
arXiv Detail & Related papers (2021-12-16T14:26:36Z)
- Sexism in the Judiciary [0.0]
We analyze 6.7 million case law documents to determine the presence of gender bias within our judicial system.
We find that current bias detection methods in NLP are insufficient to determine gender bias in our case law database.
arXiv Detail & Related papers (2021-06-29T05:38:53Z)
- Auditing for Discrimination in Algorithms Delivering Job Ads [70.02478301291264]
We develop a new methodology for black-box auditing of algorithms for discrimination in the delivery of job advertisements.
Our first contribution is to identify the distinction between skew in ad delivery caused by protected categories such as gender or race and skew arising from other factors.
Second, we develop an auditing methodology that distinguishes skew explainable by differences in qualifications from skew due to other factors.
Third, we apply our proposed methodology to two prominent targeted advertising platforms for job ads: Facebook and LinkedIn.
arXiv Detail & Related papers (2021-04-09T17:38:36Z)
- Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings [37.65897382453336]
Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors.
We propose RAN-Debias, a novel gender debiasing methodology which not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighbouring vectors.
We also propose a new bias evaluation metric, the Gender-based Illicit Proximity Estimate (GIPE).
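GIPE's exact formulation is not reproduced in this summary; as a simplified, hypothetical illustration of proximity-based bias measurement in the same spirit (not the actual metric), one can compare an occupation vector's cosine similarity to gendered anchor words — the embedding values below are invented for the example:

```python
import numpy as np

# Toy embeddings (hypothetical values, not from any real model): bias shows
# up as an occupation vector lying closer to one gendered anchor word.
emb = {
    "man":     np.array([1.0, 0.1, 0.0]),
    "woman":   np.array([-1.0, 0.1, 0.0]),
    "nurse":   np.array([-0.8, 0.3, 0.2]),
    "surgeon": np.array([0.9, 0.2, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def proximity_bias(word):
    """Positive: closer to 'man'; negative: closer to 'woman'."""
    return cosine(emb[word], emb["man"]) - cosine(emb[word], emb["woman"])

# In these toy vectors, "nurse" skews female and "surgeon" skews male,
# reproducing the biased proximities the paper's title alludes to.
nurse_bias = proximity_bias("nurse")
surgeon_bias = proximity_bias("surgeon")
```

A debiasing method like the one described would aim to drive both of these scores toward zero without collapsing the occupation vectors' other neighbourhood structure.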
arXiv Detail & Related papers (2020-06-02T20:50:43Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.