Detecting Racial Bias in Jury Selection
- URL: http://arxiv.org/abs/2103.11852v1
- Date: Mon, 22 Mar 2021 13:47:33 GMT
- Title: Detecting Racial Bias in Jury Selection
- Authors: Jack Dunn and Ying Daisy Zhuo
- Abstract summary: APM Reports collated historical court records to assess whether the State exhibited a racial bias in striking potential jurors.
This analysis used backward stepwise logistic regression to conclude that race was a significant factor.
We apply Optimal Feature Selection to identify the globally-optimal subset of features and affirm that there is significant evidence of racial bias in the strike decisions.
- Score: 0.7106986689736826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To support the 2019 U.S. Supreme Court case "Flowers v. Mississippi", APM
Reports collated historical court records to assess whether the State exhibited
a racial bias in striking potential jurors. This analysis used backward
stepwise logistic regression to conclude that race was a significant factor;
however, this method for selecting relevant features is only a heuristic and
cannot consider interactions between features. We apply Optimal
Feature Selection to identify the globally-optimal subset of features and
affirm that there is significant evidence of racial bias in the strike
decisions. We also use Optimal Classification Trees to segment the juror
population into subgroups with similar characteristics and probability of being
struck, and find that three of these subgroups exhibit significant racial
disparity in strike rate, pinpointing specific areas of bias in the dataset.
Related papers
- A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems [13.277413612930102]
We present a multi-stage causal framework incorporating criminality.
In settings like airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race.
In police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting against the other race.
arXiv Detail & Related papers (2024-02-22T20:41:43Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Estimating Racial Disparities When Race is Not Observed [3.0931877196387196]
We introduce a new class of models that produce racial disparity estimates by using surnames as an instrumental variable for race.
A validation study based on the North Carolina voter file shows that BISG+BIRDiE reduces error by up to 84% when estimating racial differences in party registration.
We apply the proposed methodology to estimate racial differences in who benefits from the home mortgage interest deduction using individual-level tax data from the U.S. Internal Revenue Service.
arXiv Detail & Related papers (2023-03-05T04:46:16Z) - Race Bias Analysis of Bona Fide Errors in face anti-spoofing [0.0]
We present a systematic study of race bias in face anti-spoofing with three key characteristics.
The focus is on analysing potential bias in the bona fide errors, where significant ethical and legal issues lie.
We demonstrate the proposed bias analysis process on a VQ-VAE based face anti-spoofing algorithm.
arXiv Detail & Related papers (2022-10-11T11:49:24Z) - Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z) - On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Mitigating Racial Biases in Toxic Language Detection with an Equity-Based Ensemble Framework [9.84413545378636]
Recent research has demonstrated how racial biases against users who write African American English exist in popular toxic language datasets.
We propose additional descriptive fairness metrics to better understand the source of these biases.
We show that our proposed framework substantially reduces the racial biases that the model learns from these datasets.
arXiv Detail & Related papers (2021-09-27T15:54:05Z) - Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and imposter sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z) - One Label, One Billion Faces: Usage and Consistency of Racial Categories in Computer Vision [75.82110684355979]
We study the racial system encoded by computer vision datasets supplying categorical race labels for face images.
We find that each dataset encodes a substantially unique racial system, despite nominally equivalent racial categories.
We find evidence that racial categories encode stereotypes, and exclude ethnic groups from categories on the basis of nonconformity to stereotypes.
arXiv Detail & Related papers (2021-02-03T22:50:04Z) - Quota-based debiasing can decrease representation of already underrepresented groups [5.1135133995376085]
We show that quota-based debiasing based on a single attribute can worsen the representation of already underrepresented groups and decrease overall fairness of selection.
Our results demonstrate the importance of including all relevant attributes in debiasing procedures and show that more effort is needed to eliminate the root causes of inequality (see the toy sketch below).
arXiv Detail & Related papers (2020-06-13T14:26:42Z)
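On the last item above (quota-based debiasing), the displacement mechanism is easy to reproduce in a toy simulation: a quota enforced on one attribute can push out high-ranking members of a group defined by a different attribute, shrinking that group's share of the selected set. The sketch below is entirely synthetic and is not the cited paper's model or data.

```python
# Toy illustration of a single-attribute quota lowering the selected-set share
# of a group defined by another attribute. Entirely synthetic data; not the
# cited paper's model or experiments.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                      # 1 = woman, 0 = man
minority = np.zeros(n, dtype=int)
men = gender == 0
minority[men] = (rng.random(men.sum()) < 0.3).astype(int)   # minority candidates concentrated among men
score = rng.normal(0.0, 1.0, n) - 0.5 * gender      # evaluation scores biased against women

k = 100
no_quota = np.argsort(-score)[:k]                   # plain top-k selection

# Gender quota: at least half of the k selected must be women.
top_women = np.argsort(-np.where(gender == 1, score, -np.inf))[: k // 2]
top_men = np.argsort(-np.where(gender == 0, score, -np.inf))[: k - k // 2]
with_quota = np.concatenate([top_women, top_men])

print("minority share, no quota:    ", minority[no_quota].mean())
print("minority share, gender quota:", minority[with_quota].mean())
```

Because the minority candidates in this synthetic pool sit mostly among the men, reserving half of the slots for women crowds some of them out; the paper's broader point is that debiasing on a single attribute ignores exactly this kind of interaction between attributes.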
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.