Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation
- URL: http://arxiv.org/abs/2209.09592v1
- Date: Tue, 20 Sep 2022 10:11:40 GMT
- Title: Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation
- Authors: Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, Gido H. Schoenmacker
- Abstract summary: This work aims to help mitigate the already existing gender wage gap by supplying unbiased job recommendations based on resumes from job seekers.
We employ a generative adversarial network to remove gender bias from word2vec representations of 12M job vacancy texts and 900k resumes.
- Score: 10.227479910430866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of this work is to help mitigate the already existing gender wage
gap by supplying unbiased job recommendations based on resumes from job
seekers. We employ a generative adversarial network to remove gender bias from
word2vec representations of 12M job vacancy texts and 900k resumes. Our results
show that representations created from recruitment texts contain algorithmic
bias and that this bias results in real-world consequences for recommendation
systems. Without controlling for bias, women are recommended jobs with
significantly lower salary in our data. With adversarially fair
representations, this wage gap disappears, meaning that our debiased job
recommendations reduce wage discrimination. We conclude that adversarial
debiasing of word representations can increase real-world fairness of systems
and thus may be part of the solution for creating fairness-aware recommendation
systems.
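The core mechanism, adversarially stripping gender information from word2vec-based representations while preserving their remaining content, can be sketched as below. This is a minimal illustration under assumptions, not the authors' actual architecture: the module names (Debiaser, Adversary), the reconstruction loss, and the trade-off weight LAMBDA are hypothetical choices for exposition.

```python
# Minimal adversarial-debiasing sketch (PyTorch). Illustrative only: the paper
# uses a generative adversarial network over word2vec representations of
# vacancies and resumes; the exact architecture, losses, and hyperparameters
# below are assumptions.
import torch
import torch.nn as nn

EMB_DIM = 300  # typical word2vec dimensionality (assumption)

class Debiaser(nn.Module):
    """Maps a biased embedding to a debiased one of the same dimension."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

class Adversary(nn.Module):
    """Tries to predict gender from the debiased embedding."""
    def __init__(self, dim=EMB_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)  # logit for predicted gender

debiaser, adversary = Debiaser(), Adversary()
opt_g = torch.optim.Adam(debiaser.parameters(), lr=1e-4)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()
LAMBDA = 1.0  # trade-off between information preservation and bias removal (assumption)

def train_step(emb, gender):
    """emb: (batch, EMB_DIM) resume/vacancy embeddings; gender: (batch, 1) floats in {0, 1}."""
    # 1) Update the adversary: learn to recover gender from debiased embeddings.
    debiased = debiaser(emb).detach()
    opt_a.zero_grad()
    loss_a = bce(adversary(debiased), gender)
    loss_a.backward()
    opt_a.step()

    # 2) Update the debiaser: stay close to the original embedding while
    #    pushing the adversary's predictions toward chance level.
    opt_g.zero_grad()
    debiased = debiaser(emb)
    fool = bce(adversary(debiased), torch.full_like(gender, 0.5))
    keep = mse(debiased, emb)
    (keep + LAMBDA * fool).backward()
    opt_g.step()
```

In such a setup, the debiaser's output would replace the original resume and vacancy embeddings fed to the recommender, so that downstream ranking can no longer exploit gender signal while still matching candidates to jobs.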
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z) - Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs)
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z) - GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z) - JobFair: A Framework for Benchmarking Gender Hiring Bias in Large Language Models [12.12628747941818]
This paper presents a novel framework for benchmarking hierarchical gender hiring bias in Large Language Models (LLMs) for resume scoring.
We introduce a new construct grounded in labour economics, legal principles, and critiques of current bias benchmarks.
We analyze gender hiring biases in ten state-of-the-art LLMs.
arXiv Detail & Related papers (2024-06-17T09:15:57Z) - The Impact of Debiasing on the Performance of Language Models in
Downstream Tasks is Underestimated [70.23064111640132]
We compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets.
Experiments show that the effects of debiasing are consistently underestimated across all tasks.
arXiv Detail & Related papers (2023-09-16T20:25:34Z) - The Resume Paradox: Greater Language Differences, Smaller Pay Gaps [0.0]
We analyze the language in millions of US workers' resumes to investigate how differences in workers' self-representation by gender compare to differences in earnings.
Across US occupations, language differences between male and female resumes correspond to 11% of the variation in the gender pay gap.
A doubling of the language difference between female and male resumes results in an annual wage increase of $2,797 for the average female worker.
arXiv Detail & Related papers (2023-07-17T15:49:35Z) - Gendered Language in Resumes and its Implications for Algorithmic Bias
in Hiring [0.0]
We train a series of models to classify the gender of the applicant.
We investigate whether it is possible to obfuscate gender from resumes.
We find that there is a significant amount of gendered information in resumes even after obfuscation.
arXiv Detail & Related papers (2021-12-16T14:26:36Z) - Auditing for Discrimination in Algorithms Delivering Job Ads [70.02478301291264]
We develop a new methodology for black-box auditing of algorithms for discrimination in the delivery of job advertisements.
Our first contribution is to identify the distinction between skew in ad delivery due to protected categories such as gender or race and skew due to differences in qualifications among the targeted audience.
Second, we develop an auditing methodology that distinguishes skew explainable by differences in qualifications from skew caused by other factors.
Third, we apply our proposed methodology to two prominent targeted advertising platforms for job ads: Facebook and LinkedIn.
arXiv Detail & Related papers (2021-04-09T17:38:36Z) - Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased
Proximities in Word Embeddings [37.65897382453336]
Existing post-processing methods for debiasing word embeddings are unable to mitigate gender bias hidden in the spatial arrangement of word vectors.
We propose RAN-Debias, a novel gender debiasing methodology which not only eliminates the bias present in a word vector but also alters the spatial distribution of its neighbouring vectors.
We also propose a new bias evaluation metric - Gender-based Illicit Proximity Estimate (GIPE)
arXiv Detail & Related papers (2020-06-02T20:50:43Z) - Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)