Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making
- URL: http://arxiv.org/abs/2405.15446v1
- Date: Fri, 24 May 2024 11:22:19 GMT
- Title: Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making
- Authors: Drago Plecko, Elias Bareinboim
- Abstract summary: We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation.
We show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$.
- Score: 58.06306331390586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Investigating fairness and equity of automated systems has become a critical field of inquiry. Most of the literature in fair machine learning focuses on defining and achieving fairness criteria in the context of prediction, while not explicitly focusing on how these predictions may be used later on in the pipeline. For instance, if commonly used criteria, such as independence or sufficiency, are satisfied for a prediction score $S$ used for binary classification, they need not be satisfied after an application of a simple thresholding operation on $S$ (as commonly used in practice). In this paper, we take an important step to address this issue in numerous statistical and causal notions of fairness. We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation. We then demonstrate that the marginal difference in the optimal 0/1 predictor $\widehat Y$ between groups, written $P(\hat y \mid x_1) - P(\hat y \mid x_0)$, can be causally decomposed into the influences of $X$ on the $L_2$-optimal prediction score $S$ and the influences of $X$ on the margin complement $M$, along different causal pathways (direct, indirect, spurious). We then show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$. This yields a new decomposition of the disparity in the predictor $\widehat Y$ that allows us to disentangle causal differences inherited from the true outcome $Y$ that exists in the real world vs. those coming from the optimization procedure itself. This observation highlights the need for more regulatory oversight due to the potential for bias amplification, and to address this issue we introduce new notions of weak and strong business necessity, together with an algorithm for assessing whether these notions are satisfied.
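To make the decomposition concrete, the sketch below (an illustration, not the paper's algorithm) fits a logistic-regression score $S$ on a synthetic dataset, thresholds it to obtain $\widehat Y$, defines the margin complement $M = \widehat Y - S$, and checks that the group disparity in $\widehat Y$ splits additively into a score part and a margin-complement part. The data-generating process and variable names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's algorithm): since the margin
# complement is M = Yhat - S, the thresholded predictor decomposes as
# Yhat = S + M, so any group disparity in Yhat splits into a score part and
# a margin-complement part.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical data-generating process: protected attribute X, mediator W, outcome Y.
X = rng.binomial(1, 0.5, n)                             # protected attribute
W = 0.8 * X + rng.normal(size=n)                        # mediator influenced by X
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X + W))))   # true outcome

# Prediction score S (probability estimate) and thresholded predictor Yhat.
clf = LogisticRegression().fit(np.column_stack([X, W]), Y)
S = clf.predict_proba(np.column_stack([X, W]))[:, 1]
Yhat = (S >= 0.5).astype(float)

# Margin complement: how much the thresholding operation moves the score.
M = Yhat - S

# Group disparity P(yhat | x1) - P(yhat | x0) and its additive split.
disparity = Yhat[X == 1].mean() - Yhat[X == 0].mean()
score_part = S[X == 1].mean() - S[X == 0].mean()
margin_part = M[X == 1].mean() - M[X == 0].mean()

print(f"disparity        = {disparity:.4f}")
print(f"score part       = {score_part:.4f}")
print(f"margin-comp part = {margin_part:.4f}")
print(f"sum of parts     = {score_part + margin_part:.4f}")  # equals the disparity
```

Since $\widehat Y = S + M$ pointwise, the additive split is exact; the paper's contribution is to further decompose each part along direct, indirect, and spurious causal pathways and, under suitable causal assumptions, to relate the score part to the true outcome $Y$.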
Related papers
- Fair Secretaries with Unfair Predictions [12.756552522270198]
We show that an algorithm can have zero probability of accepting the best candidate, which we deem unfair, despite promising to accept a candidate whose expected value is at least $\max\{\Omega(1), 1 - O(\epsilon)\}$ times the optimal value, where $\epsilon$ is the prediction error.
Our algorithm and analysis are based on a new "pegging" idea that diverges from existing works and simplifies/unifies some of their results.
arXiv Detail & Related papers (2024-11-15T00:23:59Z) - Cross-validating causal discovery via Leave-One-Variable-Out [11.891940572224783]
We use the "Leave-One-Variable-Out (LOVO)" prediction where $Y$ is inferred from $X$ without any joint observations of $X$ and $Y$.
We demonstrate that causal models on the two subsets, in the form of Acyclic Directed Mixed Graphs (ADMGs), often entail conclusions on the dependencies between $X$ and $Y$.
The prediction error can then be estimated since the joint distribution $P(X, Y)$ is assumed to be available, and $X$ and $Y$ have only been omitted for the purpose of falsification.
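As a toy illustration of the LOVO idea, the sketch below assumes a linear-Gaussian chain $X \to Z \to Y$ (a modeling assumption not stated in the summary): two models are fit on datasets that omit $Y$ and $X$ respectively, their composition predicts $Y$ from $X$, and the prediction error is evaluated on a held-out joint sample of $(X, Y)$.

```python
# Toy LOVO-style sketch under an assumed linear-Gaussian chain X -> Z -> Y.
# Dataset A observes (X, Z) but not Y; dataset B observes (Z, Y) but not X.
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    """Linear-Gaussian chain X -> Z -> Y (assumed for illustration)."""
    x = rng.normal(size=n)
    z = 0.7 * x + rng.normal(scale=0.5, size=n)
    y = -0.4 * z + rng.normal(scale=0.5, size=n)
    return x, z, y

# Two "leave-one-variable-out" datasets and one joint test set.
xa, za, _ = sample(20_000)   # Y left out
_, zb, yb = sample(20_000)   # X left out
xt, _, yt = sample(20_000)   # joint (X, Y), used only for evaluation

# Fit Z ~ X on dataset A and Y ~ Z on dataset B (least-squares slopes).
cov_a = np.cov(xa, za)
cov_b = np.cov(zb, yb)
beta_zx = cov_a[0, 1] / cov_a[0, 0]
beta_yz = cov_b[0, 1] / cov_b[0, 0]

# LOVO-style prediction of Y from X: compose the two fitted models.
y_pred = beta_yz * beta_zx * xt

# The prediction error is estimable because the joint P(X, Y) was held out.
lovo_mse = np.mean((yt - y_pred) ** 2)
baseline_mse = np.var(yt)  # always predicting the mean, for reference
print(f"LOVO MSE: {lovo_mse:.3f}   baseline MSE: {baseline_mse:.3f}")
```

In the paper's setting the composed model comes from causal discovery outputs (ADMGs) on the two variable subsets rather than from a known chain; the sketch only illustrates the evaluation logic.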
arXiv Detail & Related papers (2024-11-08T15:15:34Z) - MAC Advice for Facility Location Mechanism Design [12.136427429093395]
We study the $k$-facility location mechanism design problem.
Unlike previous models, where predictions are for the $k$ optimal facility locations, we receive $n$ predictions, one for the location of each agent.
Some $\delta$-fraction of the predicted locations are allowed to be arbitrarily incorrect, and the remaining predictions are allowed to be correct up to an $\varepsilon$-error.
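For context, here is a minimal sketch of the classic prediction-free baseline for $k=1$ on the line, the strategyproof median mechanism; the paper's MAC-advice mechanisms, which additionally consume the $n$ partially corrupted agent-location predictions, go beyond this sketch.

```python
# Classic k=1 facility location on the line (prediction-free baseline, shown
# only to fix the setting): placing the facility at the median of reported
# locations is strategyproof and minimizes total distance (social cost).
from statistics import median

def median_mechanism(reports: list[float]) -> float:
    """Return the facility location for 1-facility location on the line."""
    return median(reports)

def social_cost(facility: float, locations: list[float]) -> float:
    return sum(abs(facility - x) for x in locations)

reports = [0.1, 0.4, 0.45, 0.9, 2.0]
loc = median_mechanism(reports)
print(loc, social_cost(loc, reports))
```

The median is strategyproof because no agent can pull the facility toward their own location by misreporting; handling $k$ facilities and partially corrupted predictions is the subject of the paper.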
arXiv Detail & Related papers (2024-03-18T18:52:04Z) - $L^1$ Estimation: On the Optimality of Linear Estimators [64.76492306585168]
This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian.
In particular, it is demonstrated that if the conditional distribution $P_{X \mid Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution.
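A worked special case (assuming a jointly Gaussian model $Y = X + N$ with independent Gaussian noise, which the summary does not specify) shows why a Gaussian prior does induce a linear conditional median:

```latex
% Worked special case: a Gaussian prior induces a linear conditional median.
% Assume X ~ N(0, \sigma_X^2) and Y = X + N with N ~ N(0, \sigma_N^2) independent of X.
\[
  X \mid Y = y \;\sim\; \mathcal{N}\!\left(\frac{\sigma_X^2}{\sigma_X^2 + \sigma_N^2}\, y,\;
  \frac{\sigma_X^2 \sigma_N^2}{\sigma_X^2 + \sigma_N^2}\right),
\qquad
  \operatorname{med}[X \mid Y = y] \;=\; \mathbb{E}[X \mid Y = y]
  \;=\; \frac{\sigma_X^2}{\sigma_X^2 + \sigma_N^2}\, y .
\]
% A Gaussian is symmetric, so its median equals its mean, and the conditional
% median (the L^1-optimal estimator) is linear in y.
```

The paper's result is the converse direction: linearity of the conditional median (equivalently, symmetry of $P_{X \mid Y=y}$ for all $y$) forces $X$ to be Gaussian.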
arXiv Detail & Related papers (2023-09-17T01:45:13Z) - Autoregressive Bandits [58.46584210388307]
We propose a novel online learning setting, Autoregressive Bandits, in which the observed reward is governed by an autoregressive process of order $k$.
We show that, under mild assumptions on the reward process, the optimal policy can be conveniently computed.
We then devise a new optimistic regret minimization algorithm, namely, AutoRegressive Upper Confidence Bound (AR-UCB), that suffers sublinear regret of order $\widetilde{\mathcal{O}}\left(\frac{(k+1)^{3/2}\sqrt{nT}}{(1-\Gamma)^{2}}\right)$.
arXiv Detail & Related papers (2022-12-12T21:37:36Z) - Online Optimization with Untrusted Predictions [7.895232155155041]
We study the problem of online optimization, where a decision maker must choose points in a general metric space so as to minimize the sum of per-round, non-convex hitting costs and the costs of switching between rounds.
We propose a novel algorithm, Adaptive Online Switching (AOS), and prove that, for any desired $\delta > 0$, it is $(1+O(\delta))$-competitive if predictions are perfect, while maintaining a bounded competitive ratio of $2^{\widetilde{\mathcal{O}}(1/(\alpha\delta))}$ even when predictions are inaccurate.
arXiv Detail & Related papers (2022-02-07T21:08:02Z) - Evaluating Probabilistic Inference in Deep Learning: Beyond Marginal Predictions [37.92747047688873]
We show that the commonly-used $\tau=1$ can be insufficient to drive good decisions in many settings of interest.
We also show that, as $\tau$ grows, performing well according to $\mathbf{d}_{\mathrm{KL}}^{\tau}$ recovers universal guarantees for any possible decision.
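A small sketch (a hypothetical setup with two binary test points, not the paper's benchmark) of why matching marginals ($\tau=1$) can be insufficient while a joint ($\tau=2$) comparison distinguishes the predictive agents:

```python
# Illustrative sketch: two predictive agents share the same marginals over a
# pair of binary outcomes but differ in their joints. A tau=1 (marginal) KL
# cannot separate them; a tau=2 (joint) KL can.
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions given as arrays."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Environment: two perfectly correlated fair coins, joint over
# (y1, y2) in the order (0,0), (0,1), (1,0), (1,1).
p_env = [0.5, 0.0, 0.0, 0.5]

# Agent A: independent fair coins (correct marginals, wrong joint).
# Agent B: the correct joint.
p_a = [0.25, 0.25, 0.25, 0.25]
p_b = [0.5, 0.0, 0.0, 0.5]

# tau = 1: both agents match the marginal of each coin exactly.
marg_env = [0.5, 0.5]
print("tau=1 KL, agent A:", kl(marg_env, [0.5, 0.5]))  # 0.0
print("tau=1 KL, agent B:", kl(marg_env, [0.5, 0.5]))  # 0.0

# tau = 2: only agent B matches the joint distribution.
print("tau=2 KL, agent A:", kl(p_env, p_a))  # log 2 ~ 0.693
print("tau=2 KL, agent B:", kl(p_env, p_b))  # 0.0
```

Marginal evaluation rewards both agents equally even though only one captures the dependence between test points, which is exactly what drives downstream decisions in the settings the paper studies.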
arXiv Detail & Related papers (2021-07-20T01:55:01Z) - Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations [78.23408201652984]
Top-k predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches.
Our work is based on randomized smoothing, which builds a provably robust classifier via randomizing an input.
For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image.
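For intuition, here is a generic randomized-smoothing prediction loop (Gaussian input noise and a toy base classifier, both illustrative assumptions; the paper's construction follows the same template but uses an $L_0$-oriented randomization and certifies top-$k$ sets rather than the plain majority vote):

```python
# Generic randomized-smoothing sketch (illustrative only): the smoothed
# classifier predicts by randomizing the input many times and aggregating
# the base classifier's votes.
import numpy as np

rng = np.random.default_rng(42)

def base_classifier(x: np.ndarray) -> int:
    """Toy 3-class base classifier on 2-D inputs (illustrative only)."""
    scores = np.array([x[0] + x[1], x[0] - x[1], -x[0]])
    return int(np.argmax(scores))

def smoothed_topk(x: np.ndarray, k: int = 2, sigma: float = 0.5,
                  n_samples: int = 1000) -> list[int]:
    """Top-k classes most frequently predicted under input randomization."""
    votes = np.zeros(3, dtype=int)
    for _ in range(n_samples):
        votes[base_classifier(x + rng.normal(scale=sigma, size=x.shape))] += 1
    return [int(c) for c in np.argsort(votes)[::-1][:k]]

print(smoothed_topk(np.array([1.0, 0.2]), k=2))
```

In practice the certified guarantee comes from concentration bounds on the vote counts; the paper derives almost tight $L_0$ certificates for the top-$k$ prediction set.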
arXiv Detail & Related papers (2020-11-15T21:34:44Z) - Consistent Structured Prediction with Max-Min Margin Markov Networks [84.60515484036239]
Max-margin methods for binary classification have been extended to the structured prediction setting under the name of max-margin Markov networks ($M^3N$), but these methods can be statistically inconsistent when the relationship between inputs and labels is far from deterministic.
We overcome such limitations by defining the learning problem in terms of a "max-min" margin formulation, naming the resulting method max-min margin Markov networks ($M^4N$).
Experiments on multi-class classification, ordinal regression, sequence prediction and ranking demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-02T10:48:42Z) - Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification [54.22421582955454]
We provide the first optimal minimax guarantees on the excess risk for adversarially robust classification.
Results are stated in terms of the Adversarial Signal-to-Noise Ratio (AdvSNR), which generalizes a similar notion for standard linear classification to the adversarial setting.
arXiv Detail & Related papers (2020-06-29T21:06:52Z)