Testing for racial bias using inconsistent perceptions of race
- URL: http://arxiv.org/abs/2409.11269v1
- Date: Tue, 17 Sep 2024 15:18:46 GMT
- Title: Testing for racial bias using inconsistent perceptions of race
- Authors: Nora Gera, Emma Pierson
- Abstract summary: Tests for racial bias commonly assess whether two people of different races are treated differently.
A fundamental challenge is that, because two people may differ in many ways, factors besides race might explain differences in treatment.
We propose a test for bias which circumvents the difficulty of comparing two people by instead assessing whether the same person is treated differently when their race is perceived differently.
- Score: 1.0090972954941624
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tests for racial bias commonly assess whether two people of different races are treated differently. A fundamental challenge is that, because two people may differ in many ways, factors besides race might explain differences in treatment. Here, we propose a test for bias which circumvents the difficulty of comparing two people by instead assessing whether the *same person* is treated differently when their race is perceived differently. We apply our method to test for bias in police traffic stops, finding that the same driver is likelier to be searched or arrested by police when they are perceived as Hispanic than when they are perceived as white. Our test is broadly applicable to other datasets where race, gender, or other identity data are perceived rather than self-reported, and the same person is observed multiple times.
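A minimal sketch of the within-person idea, assuming a hypothetical stop-level table with columns driver_id, perceived_race, and searched (these names and the paired-comparison logic are illustrative assumptions, not the authors' actual data or estimator): restrict to drivers observed under more than one perceived race, then compare outcomes within that pool.

```python
import pandas as pd

# Toy stop-level data: each row is one traffic stop of a driver whose
# race is recorded as *perceived* (all values invented for illustration).
stops = pd.DataFrame({
    "driver_id":      [1, 1, 2, 2, 3, 3, 4],
    "perceived_race": ["white", "hispanic", "white", "hispanic",
                       "white", "white", "hispanic"],
    "searched":       [0, 1, 0, 0, 0, 0, 1],
})

# Keep only drivers observed under more than one perceived race:
# within-driver variation is what lets us hold the person fixed.
multi = stops.groupby("driver_id").filter(
    lambda g: g["perceived_race"].nunique() > 1
)

# Compare search rates by perceived race within this same pool of drivers.
print(multi.groupby("perceived_race")["searched"].mean())

# A per-driver paired difference is a sharper version of the same idea.
diffs = multi.pivot_table(index="driver_id", columns="perceived_race",
                          values="searched", aggfunc="mean")
print((diffs["hispanic"] - diffs["white"]).mean())
```

Restricting to drivers whose perceived race is inconsistent is what replaces the usual cross-person comparison: any outcome gap within this pool cannot be explained by fixed differences between people.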
Related papers
- The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention [61.80236015147771]
We quantify the trade-off between using diversity interventions and preserving demographic factuality in T2I models.
Experiments on DoFaiR reveal that diversity-oriented instructions increase the number of different gender and racial groups.
We propose Fact-Augmented Intervention (FAI), which incorporates verbalized or retrieved factual information about the historical gender and racial composition of generation subjects.
arXiv Detail & Related papers (2024-06-29T09:09:42Z)
- A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems [13.277413612930102]
We present a multi-stage causal framework incorporating criminality.
In settings like airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race.
In police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting against the other race.
arXiv Detail & Related papers (2024-02-22T20:41:43Z)
- Comparing Biases and the Impact of Multilingual Training across Multiple Languages [70.84047257764405]
We present a bias analysis across Italian, Chinese, English, Hebrew, and Spanish on the downstream sentiment analysis task.
We adapt existing sentiment bias templates in English to Italian, Chinese, Hebrew, and Spanish for four attributes: race, religion, nationality, and gender.
Our results reveal similarities in bias expression such as favoritism of groups that are dominant in each language's culture.
arXiv Detail & Related papers (2023-05-18T18:15:07Z)
- Studying Bias in GANs through the Lens of Race [91.95264864405493]
We study how the performance and evaluation of generative image models are impacted by the racial composition of their training datasets.
Our results show that the racial composition of generated images successfully preserves that of the training data.
However, we observe that truncation, a technique used to generate higher quality images during inference, exacerbates racial imbalances in the data.
arXiv Detail & Related papers (2022-09-06T22:25:56Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute; a sketch of this neighborhood-balance diagnostic follows below.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
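One way to read "the neighborhood of each sample is balanced in terms of the sensitive attribute" is as a k-nearest-neighbor check in the learned representation. The sketch below is an illustrative diagnostic under assumed inputs (an embedding Z and group labels s, both synthetic here), not the paper's training objective.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_balance(Z, sensitive, k=10):
    """For each sample, the share of its k nearest neighbors (in the
    learned representation Z) belonging to each sensitive group.
    Near-equal shares everywhere ~ a locally fair representation."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    _, idx = nn.kneighbors(Z)            # idx[:, 0] is the point itself
    neigh_groups = sensitive[idx[:, 1:]]
    groups = np.unique(sensitive)
    # rows: samples; columns: share of each group among the k neighbors
    return np.stack([(neigh_groups == g).mean(axis=1) for g in groups],
                    axis=1)

# Hypothetical usage with synthetic data.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 8))            # (n, d) learned embedding
s = rng.integers(0, 2, size=200)         # (n,) sensitive group labels
shares = neighborhood_balance(Z, s, k=10)
print(shares.mean(axis=0))               # ~[0.5, 0.5] when balanced
```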
- An Examination of Fairness of AI Models for Deepfake Detection [5.4852920337961235]
We evaluate bias present in deepfake datasets and detection models across protected subgroups.
Using facial datasets balanced by race and gender, we examine three popular deepfake detectors and find large disparities in predictive performance across races.
arXiv Detail & Related papers (2021-05-02T21:55:04Z)
- Avoiding bias when inferring race using name-based approaches [0.8543368663496084]
We use information from the U.S. Census and mortgage applications to infer the race of U.S. affiliated authors in the Web of Science.
Our results demonstrate that the validity of name-based inference varies by race/ethnicity and that threshold approaches underestimate Black authors and overestimate White authors; the sketch below contrasts threshold and probabilistic assignment.
arXiv Detail & Related papers (2021-04-14T08:36:22Z)
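A toy illustration of why thresholding distorts counts, using invented surname probabilities (the paper itself draws on U.S. Census and mortgage-application data): assigning each author wholly to the most likely race for their surname discards the probability mass that belongs to other groups.

```python
# Toy P(race | surname) table; all values invented for illustration.
p_race_given_name = {
    "garcia":     {"white": 0.05, "black": 0.01, "hispanic": 0.92, "other": 0.02},
    "washington": {"white": 0.10, "black": 0.87, "hispanic": 0.01, "other": 0.02},
    "miller":     {"white": 0.84, "black": 0.11, "hispanic": 0.02, "other": 0.03},
    "lee":        {"white": 0.45, "black": 0.17, "hispanic": 0.02, "other": 0.36},
}
authors = ["miller", "lee", "garcia", "lee", "washington", "miller"]

# Threshold approach: each author counts fully toward the argmax race.
threshold_counts = {}
for name in authors:
    top = max(p_race_given_name[name], key=p_race_given_name[name].get)
    threshold_counts[top] = threshold_counts.get(top, 0) + 1

# Probabilistic approach: each author contributes fractional counts.
prob_counts = {}
for name in authors:
    for race, p in p_race_given_name[name].items():
        prob_counts[race] = prob_counts.get(race, 0.0) + p

# In this toy the threshold count overstates White authors and
# understates Black authors relative to the probabilistic count.
print("threshold:", threshold_counts)
print("probabilistic:", {r: round(c, 2) for r, c in prob_counts.items()})
```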
- One Label, One Billion Faces: Usage and Consistency of Racial Categories in Computer Vision [75.82110684355979]
We study the racial system encoded by computer vision datasets supplying categorical race labels for face images.
We find that each dataset encodes a substantially unique racial system, despite nominally equivalent racial categories.
We find evidence that racial categories encode stereotypes, and exclude ethnic groups from categories on the basis of nonconformity to stereotypes.
arXiv Detail & Related papers (2021-02-03T22:50:04Z)
- The role of collider bias in understanding statistics on racially biased policing [0.0]
Contradictory conclusions have been drawn from the same data about whether unarmed Black people are more likely to be shot by police than unarmed white people.
We provide a causal Bayesian network model to explain this disagreement via collider bias, also known as Berkson's paradox; a minimal simulation of the effect follows below.
arXiv Detail & Related papers (2020-07-16T15:26:23Z)
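Collider bias is easy to reproduce synthetically: condition on a common effect and two independent causes become correlated in the selected sample. Everything below is an invented illustration of the general mechanism, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Two independent binary causes (both synthetic).
group_a = rng.random(n) < 0.5        # True: group A, False: group B
threat = rng.random(n) < 0.1         # perceived threat, independent of group

# Collider: an encounter enters the data if there is a threat OR, with
# some probability, simply because the civilian is from group A
# (e.g. over-policing of that group).
recorded = threat | (group_a & (rng.random(n) < 0.1))

# Unconditioned: threat rates are identical across groups.
print(threat[group_a].mean(), threat[~group_a].mean())      # ~0.10, ~0.10

# Conditioned on the collider: group A now looks *less* threatening,
# because many of its recorded encounters were non-threat stops, while
# group B only enters the data via threat encounters.
print(threat[group_a & recorded].mean(),                    # ~0.53
      threat[~group_a & recorded].mean())                   # ~1.00
```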
- Towards Controllable Biases in Language Generation [87.89632038677912]
We develop a method to induce societal biases in generated text when input prompts contain mentions of specific demographic groups.
We analyze two scenarios: 1) inducing negative biases for one demographic and positive biases for another demographic, and 2) equalizing biases between demographics.
arXiv Detail & Related papers (2020-05-01T08:25:11Z)