Sexism in the Judiciary
- URL: http://arxiv.org/abs/2106.15103v1
- Date: Tue, 29 Jun 2021 05:38:53 GMT
- Title: Sexism in the Judiciary
- Authors: Noa Baker Gillis
- Abstract summary: We analyze 6.7 million case law documents to determine the presence of gender bias within our judicial system.
We find that current bias detection methods in NLP are insufficient to determine gender bias in our case law database.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We analyze 6.7 million case law documents to determine the presence of gender
bias within our judicial system. We find that current bias detection methods in
NLP are insufficient to determine gender bias in our case law database and
propose an alternative approach. We show that existing algorithms' inconsistent
results are a consequence of how prior research has defined the biases themselves.
Bias detection algorithms rely on groups of words to represent bias (e.g.,
'salary,' 'job,' and 'boss' to represent employment as a potentially biased
theme against women in text). However, the methods to build these groups of
words have several weaknesses, primarily that the word lists are based on the
researchers' own intuitions. We suggest two new methods of automating the
creation of word lists to represent biases. We find that our methods outperform
current NLP bias detection methods. Our research improves the capabilities of
NLP technology to detect bias and highlights gender biases present in
influential case law. To test our NLP bias detection method's performance, we
regress our results of bias in case law against U.S. Census data on women's
participation in the workforce over the last 100 years.
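The abstract describes the word-list approach and the census regression only at a high level, so the sketch below is a minimal illustration rather than the paper's actual procedure: a WEAT-style association score between gendered anchor words and a themed word list (using the 'salary'/'job'/'boss' employment example from the abstract), followed by a least-squares fit of per-decade scores against workforce-participation rates. All embedding vectors, word lists, and numbers are placeholders, not the paper's data.

import numpy as np
from numpy.linalg import norm

def cosine(u, v):
    return float(u @ v / (norm(u) * norm(v)))

def theme_bias(emb, female_words, male_words, theme_words):
    # Mean difference in cosine similarity between each theme word and the
    # female vs. male anchor words; positive means the theme sits closer to
    # the female anchors, negative means closer to the male anchors.
    diffs = []
    for t in theme_words:
        if t not in emb:
            continue
        f = np.mean([cosine(emb[t], emb[w]) for w in female_words if w in emb])
        m = np.mean([cosine(emb[t], emb[w]) for w in male_words if w in emb])
        diffs.append(f - m)
    return float(np.mean(diffs))

rng = np.random.default_rng(0)
vocab = ["she", "her", "he", "him", "salary", "job", "boss"]
emb = {w: rng.normal(size=50) for w in vocab}  # placeholder vectors, not real embeddings

score = theme_bias(emb,
                   female_words=["she", "her"],
                   male_words=["he", "him"],
                   theme_words=["salary", "job", "boss"])
print(f"employment-theme bias score: {score:+.3f}")

# Placeholder per-decade bias scores and workforce-participation percentages;
# a simple least-squares fit stands in for the regression step described above.
decades = np.arange(1920, 2020, 10)
bias_by_decade = rng.normal(size=decades.size)          # placeholder scores
participation = np.linspace(20.0, 57.0, decades.size)   # placeholder percentages
slope, intercept = np.polyfit(participation, bias_by_decade, 1)
print(f"fit: bias ~ {slope:.3f} * participation + {intercept:.3f}")

With real embeddings and per-decade corpora, the same two steps would be run once per decade of case law before the fit; the paper's own automated word-list construction is not reproduced here.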
Related papers
- Disclosure and Mitigation of Gender Bias in LLMs [64.79319733514266]
Large Language Models (LLMs) can generate biased responses.
We propose an indirect probing framework based on conditional generation.
We explore three distinct strategies to disclose explicit and implicit gender bias in LLMs.
arXiv Detail & Related papers (2024-02-17T04:48:55Z)
- Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns [53.62845317039185]
Bias-measuring datasets play a critical role in detecting biased behavior of language models.
We propose a novel method to collect diverse, natural, and minimally distant text pairs via counterfactual generation.
We show that four pre-trained language models are significantly more inconsistent across different gender groups than within each group.
arXiv Detail & Related papers (2023-02-11T12:11:03Z)
- Causally Testing Gender Bias in LLMs: A Case Study on Occupational Bias [33.99768156365231]
We introduce a causal formulation for bias measurement in generative language models.
We propose a benchmark called OccuGender, with a bias-measuring procedure to investigate occupational gender bias.
The results show that these models exhibit substantial occupational gender bias.
arXiv Detail & Related papers (2022-12-20T22:41:24Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- A Survey on Gender Bias in Natural Language Processing [22.91475787277623]
We present a survey of 304 papers on gender bias in natural language processing.
We compare and contrast approaches to detecting and mitigating gender bias.
We find that research on gender bias suffers from four core limitations.
arXiv Detail & Related papers (2021-12-28T14:54:18Z)
- Evaluating Gender Bias in Natural Language Inference [5.034017602990175]
We propose an evaluation methodology to measure gender bias in natural language understanding through inference.
We use our challenge task to investigate state-of-the-art NLI models on the presence of gender stereotypes using occupations.
Our findings suggest that three models trained on MNLI and SNLI datasets are significantly prone to gender-induced prediction errors.
arXiv Detail & Related papers (2021-05-12T09:41:51Z)
- Robustness and Reliability of Gender Bias Assessment in Word Embeddings: The Role of Base Pairs [23.574442657224008]
It has been shown that word embeddings can exhibit gender bias, and various methods have been proposed to quantify this.
Previous work has leveraged gender word pairs to measure bias and extract biased analogies.
We show that the reliance on these gendered pairs has strong limitations.
In particular, the well-known analogy "man is to computer-programmer as woman is to homemaker" is due to word similarity rather than societal bias (a vector-arithmetic sketch of this base-pair setup appears after this list).
arXiv Detail & Related papers (2020-10-06T16:09:05Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation [94.98656228690233]
We propose a technique that purifies the word embeddings against corpus regularities prior to inferring and removing the gender subspace.
Our approach preserves the distributional semantics of the pre-trained word embeddings while reducing gender bias to a significantly larger degree than prior approaches.
arXiv Detail & Related papers (2020-05-03T02:33:20Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
- Unsupervised Discovery of Implicit Gender Bias [38.59057512390926]
We take an unsupervised approach to identifying gender bias against women at a comment level.
Our main challenge is forcing the model to focus on signs of implicit bias, rather than other artifacts in the data.
arXiv Detail & Related papers (2020-04-17T17:36:20Z)
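For readers unfamiliar with the gendered base-pair machinery discussed in the "Robustness and Reliability" entry above, the following sketch shows the standard construction under simplified assumptions: a gender direction averaged from (female, male) difference vectors, a projection onto that direction, and analogy completion by vector arithmetic (the "man is to computer-programmer as woman is to homemaker" pattern). The embedding vectors here are random placeholders; the cited papers use pretrained embeddings such as word2vec or GloVe.

import numpy as np
from numpy.linalg import norm

def unit(v):
    return v / norm(v)

def gender_direction(emb, pairs):
    # Average of normalized (female - male) difference vectors over base pairs.
    diffs = [unit(emb[f] - emb[m]) for f, m in pairs]
    return unit(np.mean(diffs, axis=0))

def complete_analogy(emb, a, b, c):
    # Return the vocabulary word whose vector is closest to b - a + c,
    # i.e. the usual "a is to b as c is to ?" construction.
    target = unit(emb[b] - emb[a] + emb[c])
    best, best_sim = None, -np.inf
    for w, v in emb.items():
        if w in (a, b, c):
            continue
        sim = float(target @ unit(v))
        if sim > best_sim:
            best, best_sim = w, sim
    return best, best_sim

rng = np.random.default_rng(0)
vocab = ["man", "woman", "he", "she", "computer_programmer", "homemaker", "nurse"]
emb = {w: rng.normal(size=50) for w in vocab}  # placeholder vectors, not real embeddings

g = gender_direction(emb, [("woman", "man"), ("she", "he")])
print("projection of 'computer_programmer' onto the gender direction:",
      round(float(emb["computer_programmer"] @ g), 3))
print(complete_analogy(emb, "man", "computer_programmer", "woman"))

The Base Pairs paper argues that analogies produced this way can reflect plain word similarity rather than societal bias, while debiasing methods such as Double-Hard Debias (also listed above) work by removing the component of each word vector along a direction like g.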
This list is automatically generated from the titles and abstracts of the papers on this site.