Auditing Fairness under Unobserved Confounding
- URL: http://arxiv.org/abs/2403.14713v2
- Date: Thu, 25 Apr 2024 02:56:14 GMT
- Title: Auditing Fairness under Unobserved Confounding
- Authors: Yewon Byun, Dylan Sam, Michael Oberst, Zachary C. Lipton, Bryan Wilder
- Abstract summary: We show that we can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
- Score: 56.61738581796362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The presence of inequity is a fundamental problem in the outcomes of decision-making systems, especially when human lives are at stake. Yet, estimating notions of unfairness or inequity is difficult, particularly if they rely on hard-to-measure concepts such as risk. Such measurements of risk can be accurately obtained when no unobserved confounders have jointly influenced past decisions and outcomes. However, in the real world, this assumption rarely holds. In this paper, we show a surprising result that one can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed. We use the fact that in many real-world settings (e.g., the release of a new treatment) we have data from prior to any allocation to derive unbiased estimates of risk. This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner. For instance, in a real-world study of Paxlovid allocation, our framework provably identifies that observed racial inequity cannot be explained by unobserved confounders of the same strength as important observed covariates.
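To make the auditing quantity concrete, here is a minimal sketch of the idea in Python: fit a risk model on pre-allocation data (where outcomes cannot have been influenced by any allocation decision), then compare treatment rates among predicted high-risk individuals across groups. The variable names, the logistic risk model, and the fixed threshold are illustrative assumptions; the paper's contribution is bounding these rates under unobserved confounding, not point-estimating them.

```python
# Minimal sketch of the auditing idea described in the abstract (not the
# authors' exact estimator). Variable names and the risk model are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def audit_treatment_rates(X_pre, y_pre, X_post, treated_post, group_post,
                          risk_threshold=0.5):
    """Compare treatment rates among predicted high-risk individuals by group.

    X_pre, y_pre: covariates and outcomes from *before* the treatment was
        available, so the outcome model is not confounded by allocation.
    X_post, treated_post, group_post: post-allocation covariates, treatment
        indicators, and group labels for the cohort being audited.
    """
    # A risk model fit on pre-allocation data gives unbiased risk estimates,
    # since no allocation decision could have influenced those outcomes.
    risk_model = LogisticRegression(max_iter=1000).fit(X_pre, y_pre)
    risk = risk_model.predict_proba(X_post)[:, 1]

    high_risk = risk >= risk_threshold
    rates = {}
    for g in np.unique(group_post):
        mask = high_risk & (group_post == g)
        rates[g] = treated_post[mask].mean() if mask.any() else float("nan")
    return rates  # cross-group disparities here are what the paper bounds
```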
Related papers
- FairlyUncertain: A Comprehensive Benchmark of Uncertainty in Algorithmic Fairness [4.14360329494344]
We introduce FairlyUncertain, an axiomatic benchmark for evaluating uncertainty estimates in fairness.
Our benchmark posits that fair predictive uncertainty estimates should be consistent across learning pipelines and calibrated to observed randomness.
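As a rough illustration of the "calibrated to observed randomness" criterion, a plausible diagnostic in the spirit of the benchmark (not its actual API): standardized residuals should have roughly unit variance in every group.

```python
# Illustrative per-group calibration check for regression uncertainty:
# standardized residuals should have variance close to 1 in each group.
import numpy as np

def per_group_calibration(y_true, y_pred, y_std, groups):
    """Return the variance of standardized residuals per group (ideal: ~1)."""
    z = (y_true - y_pred) / y_std   # standardized residuals
    return {g: float(np.var(z[groups == g])) for g in np.unique(groups)}
```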
arXiv Detail & Related papers (2024-10-02T20:15:29Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
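A Monte Carlo sketch of this definition of true criticality, assuming a simulator with a hypothetical save/restore-state API and a gym-style step interface:

```python
# Monte Carlo sketch of "true criticality" as defined above: the expected
# reward drop when the agent takes n random actions from a state before
# resuming its policy. env.restore() and env.observation() are hypothetical.
import numpy as np

def true_criticality(env, policy, state, n, num_rollouts=100):
    def rollout(deviate):
        env.restore(state)                         # hypothetical snapshot API
        total, steps, done = 0.0, 0, False
        while not done:
            if deviate and steps < n:
                action = env.action_space.sample()  # n random deviations
            else:
                action = policy(env.observation())  # resume the policy
            _, reward, done, _ = env.step(action)
            total += reward
            steps += 1
        return total

    on_policy = np.mean([rollout(False) for _ in range(num_rollouts)])
    deviated = np.mean([rollout(True) for _ in range(num_rollouts)])
    return on_policy - deviated   # expected drop in reward
```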
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- The Unfairness of $\varepsilon$-Fairness [0.0]
We show that if the concept of $\varepsilon$-fairness is employed, it can lead to outcomes that are maximally unfair in the real-world context.
We illustrate our findings with two real-world examples: college admissions and credit risk assessment.
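Under one common reading of $\varepsilon$-fairness as approximate demographic parity (an assumption on our part, not necessarily the paper's exact definition), the criterion amounts to a simple check:

```python
# One common reading of epsilon-fairness: approximate demographic parity.
# The paper's point is that satisfying such a relaxation can still permit
# maximally unfair real-world outcomes.
import numpy as np

def is_epsilon_fair(y_hat, groups, eps):
    """True if the largest gap in positive-prediction rates is at most eps."""
    rates = [np.mean(y_hat[groups == g]) for g in np.unique(groups)]
    return max(rates) - min(rates) <= eps
```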
arXiv Detail & Related papers (2024-05-15T14:13:35Z)
- Data-Adaptive Tradeoffs among Multiple Risks in Distribution-Free Prediction [55.77015419028725]
We develop methods that permit valid control of risk when threshold and tradeoff parameters are chosen adaptively.
Our methodology supports monotone and nearly-monotone risks, but otherwise makes no distributional assumptions.
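A simplified sketch in the spirit of distribution-free risk control for a single monotone risk (the paper's adaptive, multi-risk procedure is more involved): choose the smallest threshold whose empirical risk on calibration data stays below the target level.

```python
# Sketch of threshold selection for a monotone risk: risk_fn is assumed
# non-increasing in the threshold, so the smallest threshold meeting the
# target alpha is found by an ascending scan. Illustrative only.
def pick_threshold(risk_fn, calib_data, candidate_thresholds, alpha):
    """risk_fn(threshold, data) -> empirical risk on calibration data."""
    for lam in sorted(candidate_thresholds):
        if risk_fn(lam, calib_data) <= alpha:
            return lam
    return max(candidate_thresholds)   # fall back to most conservative
```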
arXiv Detail & Related papers (2024-03-28T17:28:06Z)
- What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment [58.442274475425144]
We develop a robust inference algorithm that is efficient almost regardless of how, and how fast, the underlying nuisance functions are learned.
We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed.
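For context, the classical covariate-free bounds on the fraction harmed follow from Fréchet-Hoeffding inequalities; the paper sharpens bounds of this kind using covariates. For a binary outcome where 1 is the good outcome:

```python
# Classical covariate-free (Frechet-Hoeffding) bounds on the fraction harmed,
# P(Y(0)=1, Y(1)=0), for a binary outcome where 1 is the good outcome.
# This is just the baseline that the paper improves upon.
def harm_bounds(p0, p1):
    """p0 = P(Y(0)=1), p1 = P(Y(1)=1), e.g. estimated from randomized arms."""
    lower = max(0.0, p0 - p1)
    upper = min(p0, 1.0 - p1)
    return lower, upper
```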
arXiv Detail & Related papers (2022-05-20T17:36:33Z)
- Treatment Effect Risk: Bounds and Inference [58.442274475425144]
Even when the average treatment effect, which measures the change in social welfare, is positive, there remains a risk of a negative effect on, say, 10% of the population. In this paper we consider how to assess this important risk measure, formalized as the conditional value at risk (CVaR) of the individual treatment effect (ITE) distribution.
Some of our bounds can also be interpreted as summarizing a complex conditional average treatment effect (CATE) function in a single metric and are of interest beyond their role as bounds.
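The plug-in computation of the risk measure itself is straightforward: the empirical CVaR at level alpha of a sample of treatment-effect draws (or CATE estimates) is the mean of the worst alpha-fraction. The paper's contribution is bounding this quantity when the ITE distribution is not identified; the sketch below shows only the plug-in step.

```python
# Empirical conditional value at risk (CVaR) at level alpha: the mean of the
# worst alpha-fraction of values. Applied here to treatment-effect samples.
import numpy as np

def cvar(values, alpha):
    values = np.sort(np.asarray(values))           # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(values))))  # size of the worst tail
    return values[:k].mean()
```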
arXiv Detail & Related papers (2022-01-15T17:21:26Z)
- Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition [9.208828373290487]
Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems.
It is often challenging to bridge the gap between an apparently optimal policy learnt by an agent and its real-world deployment.
Here we propose how a distributional approach (UA-DQN) can be recast to surface these uncertainties by decomposing the net effect of each source of uncertainty.
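A common ensemble-style version of such a decomposition splits predictive uncertainty into an epistemic part (disagreement between members) and an aleatoric part (average within-member variance); this illustrates the general idea rather than the UA-DQN implementation.

```python
# Ensemble-style decomposition of predictive uncertainty into epistemic
# (disagreement between ensemble members) and aleatoric (average
# within-member variance) components. Illustrative, not UA-DQN itself.
import numpy as np

def decompose_uncertainty(member_means, member_vars):
    """member_means, member_vars: shape (ensemble_size,) arrays giving each
    member's predicted mean and variance for one state-action pair."""
    epistemic = np.var(member_means)     # spread of the means
    aleatoric = np.mean(member_vars)     # irreducible-noise estimate
    return epistemic, aleatoric
```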
arXiv Detail & Related papers (2021-09-16T09:36:53Z)
- Feedback Effects in Repeat-Use Criminal Risk Assessments [0.0]
We show that risk can propagate over sequential decisions in ways that are not captured by one-shot tests.
Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity.
arXiv Detail & Related papers (2020-11-28T06:40:05Z)
- Temporal Difference Uncertainties as a Signal for Exploration [76.6341354269013]
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy.
In this paper, we highlight that value estimates are easily biased and temporally inconsistent.
We propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors.
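A minimal sketch of the underlying idea, assuming an ensemble of value functions (each a callable mapping a state to an array of action values, an assumption on our part): the spread of per-member temporal-difference errors serves as an uncertainty signal.

```python
# Sketch of inducing a distribution over temporal-difference errors with an
# ensemble of action-value functions; their disagreement on the TD error is
# used as an exploration signal. Illustrative, not the paper's estimator.
import numpy as np

def td_error_spread(ensemble, transition, gamma=0.99):
    s, a, r, s_next = transition
    # One TD error per ensemble member; the spread reflects value uncertainty.
    td_errors = [r + gamma * q(s_next).max() - q(s)[a] for q in ensemble]
    return np.std(td_errors)
```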
arXiv Detail & Related papers (2020-10-05T18:11:22Z)
- Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding [33.58862183373374]
We assess the robustness of off-policy evaluation (OPE) methods under unobserved confounding.
We show that even small amounts of per-decision confounding can heavily bias OPE methods.
We propose an efficient loss-minimization-based procedure for computing worst-case bounds.
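To convey the flavor of such bounds, here is a crude interval-propagation sketch (not the paper's loss-minimization procedure): if unobserved confounding can shift each nominal behavior propensity by at most a factor Gamma, importance weights are only known up to an interval, yielding worst-case bounds on the single-decision off-policy value.

```python
# Crude sensitivity sketch: true behavior propensities are assumed to lie in
# [pi_b_nominal / gamma_conf, pi_b_nominal * gamma_conf], so importance
# weights lie in an interval and the off-policy value is bounded by choosing
# the adversarial weight for each logged reward. Illustrative only.
import numpy as np

def ope_bounds(rewards, pi_e, pi_b_nominal, gamma_conf):
    """Single-decision importance-sampling bounds. pi_e / pi_b_nominal are
    target and nominal behavior probabilities of logged actions; gamma_conf >= 1."""
    w_lo = pi_e / (pi_b_nominal * gamma_conf)   # weights' lower envelope
    w_hi = pi_e * gamma_conf / pi_b_nominal     # weights' upper envelope
    lower = np.mean(np.where(rewards >= 0, w_lo, w_hi) * rewards)
    upper = np.mean(np.where(rewards >= 0, w_hi, w_lo) * rewards)
    return lower, upper
```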
arXiv Detail & Related papers (2020-03-12T05:20:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.