The role of collider bias in understanding statistics on racially biased
policing
- URL: http://arxiv.org/abs/2007.08406v1
- Date: Thu, 16 Jul 2020 15:26:23 GMT
- Title: The role of collider bias in understanding statistics on racially biased
policing
- Authors: Norman Fenton, Martin Neil, Steven Frazier
- Abstract summary: Contradictory conclusions have been drawn from the same data about whether unarmed blacks are more likely to be shot by police than unarmed whites.
We provide a causal Bayesian network model to explain this bias, which is called collider bias or Berkson's paradox.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contradictory conclusions have been drawn from the same data about
whether unarmed blacks are more likely to be shot by police than unarmed
whites. The problem is that, by relying only on data on 'police encounters',
there is the
possibility that genuine bias can be hidden. We provide a causal Bayesian
network model to explain this bias, which is called collider bias or Berkson's
paradox, and show how the different conclusions arise from the same model and
data. We also show that causal Bayesian networks provide the ideal formalism
for considering alternative hypotheses and explanations of bias.
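The selection mechanism the abstract describes can be illustrated with a short simulation (a hypothetical sketch, not the paper's model; all probabilities below are made up for illustration): two causes that are independent in the full population become negatively associated once we condition on the collider, i.e. restrict attention to recorded encounters.

```python
import random

random.seed(0)

# Illustrative collider-bias (Berkson's paradox) simulation.
# Two independent binary causes, A and B, each raise the chance
# of a 'police encounter'. Conditioning on encounters induces a
# spurious negative association between A and B.
N = 200_000
encounters = []
for _ in range(N):
    a = random.random() < 0.5   # cause A, independent of B
    b = random.random() < 0.5   # cause B, independent of A
    # either cause independently raises the encounter probability
    p_enc = 0.05 + 0.4 * a + 0.4 * b
    if random.random() < p_enc:
        encounters.append((a, b))

def p_b_given_a(pairs, a_val):
    """P(B = 1 | A = a_val) within the selected (encounter) sample."""
    sub = [b for a, b in pairs if a == a_val]
    return sum(sub) / len(sub)

# In the full population P(B | A) = P(B) = 0.5, yet among
# encounters B appears less likely when A is present:
print(p_b_given_a(encounters, True))   # lower (~0.65 analytically)
print(p_b_given_a(encounters, False))  # higher (~0.90 analytically)
```

Because both causes raise the selection probability, knowing that a selected case has A = 1 makes B less necessary as an explanation for why it was selected, which is exactly the dependence-by-conditioning that the paper's causal Bayesian network makes explicit.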
Related papers
- "Patriarchy Hurts Men Too." Does Your Model Agree? A Discussion on Fairness Assumptions [3.706222947143855]
In the context of group fairness, this approach often obscures implicit assumptions about how bias is introduced into the data.
A common implicit assumption is that the biasing process is a monotonic function of the fair scores, dependent solely on the sensitive attribute.
When the biasing process behaves in a more complex way than mere monotonicity, such implicit assumptions need to be identified and rejected.
arXiv Detail & Related papers (2024-08-01T07:06:30Z) - A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems [13.277413612930102]
We present a multi-stage causal framework incorporating criminality.
In settings like airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race.
In police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting against the other race.
arXiv Detail & Related papers (2024-02-22T20:41:43Z) - Robustly Improving Bandit Algorithms with Confounded and Selection
Biased Offline Data: A Causal Approach [18.13887411913371]
This paper studies bandit problems where an agent has access to offline data that might be utilized to potentially improve the estimation of each arm's reward distribution.
We categorize the biases into confounding bias and selection bias based on the causal structure they imply.
We extract the causal bound for each arm that is robust towards compound biases from biased observational data.
arXiv Detail & Related papers (2023-12-20T03:03:06Z) - It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep
Models [51.66015254740692]
We show that for an ensemble of deep learning based classification models, bias and variance are aligned at a sample level.
We study this phenomenon from two theoretical perspectives: calibration and neural collapse.
arXiv Detail & Related papers (2023-10-13T17:06:34Z) - It's All Relative: Interpretable Models for Scoring Bias in Documents [10.678219157857946]
We propose an interpretable model to score the bias present in web documents, based only on their textual content.
Our model incorporates assumptions reminiscent of the Bradley-Terry axioms and is trained on pairs of revisions of the same Wikipedia article.
We show that we can interpret the parameters of the trained model to discover the words most indicative of bias.
arXiv Detail & Related papers (2023-07-16T19:35:38Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - Unbiased Supervised Contrastive Learning [10.728852691100338]
In this work, we tackle the problem of learning representations that are robust to biases.
We first present a margin-based theoretical framework that allows us to clarify why recent contrastive losses can fail when dealing with biased data.
We derive a novel formulation of the supervised contrastive loss (epsilon-SupInfoNCE), providing more accurate control of the minimal distance between positive and negative samples.
Thanks to our theoretical framework, we also propose FairKL, a new debiasing regularization loss, that works well even with extremely biased data.
arXiv Detail & Related papers (2022-11-10T13:44:57Z) - Reconciling Individual Probability Forecasts [78.0074061846588]
We show that two parties who agree on the data cannot disagree on how to model individual probabilities.
We conclude that although individual probabilities are unknowable, they are contestable via a computationally and data efficient process.
arXiv Detail & Related papers (2022-09-04T20:20:35Z) - The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes for social bias in downstream tasks.
arXiv Detail & Related papers (2022-03-28T09:28:13Z) - The effect of differential victim crime reporting on predictive policing
systems [84.86615754515252]
We show how differential victim crime reporting rates can lead to outcome disparities in common crime hot spot prediction models.
Our results suggest that differential crime reporting rates can lead to a displacement of predicted hotspots from high crime but low reporting areas to high or medium crime and high reporting areas.
arXiv Detail & Related papers (2021-01-30T01:57:22Z) - UnQovering Stereotyping Biases via Underspecified Questions [68.81749777034409]
We present UNQOVER, a framework to probe and quantify biases through underspecified questions.
We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors.
We use this metric to analyze four important classes of stereotypes: gender, nationality, ethnicity, and religion.
arXiv Detail & Related papers (2020-10-06T01:49:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.