Selective Regression Under Fairness Criteria
- URL: http://arxiv.org/abs/2110.15403v1
- Date: Thu, 28 Oct 2021 19:05:12 GMT
- Title: Selective Regression Under Fairness Criteria
- Authors: Abhin Shah, Yuheng Bu, Joshua Ka-Wing Lee, Subhro Das, Rameswar Panda,
Prasanna Sattigeri, Gregory W. Wornell
- Abstract summary: In some cases, the performance of a minority group can decrease as coverage is reduced.
We show that such unwanted behavior can be avoided if we can construct features satisfying the sufficiency criterion.
- Score: 30.672082160544996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Selective regression allows abstention from prediction if the confidence to
make an accurate prediction is not sufficient. In general, by allowing a reject
option, one expects the performance of a regression model to increase at the
cost of reducing coverage (i.e., by predicting fewer samples). However, as
shown in this work, in some cases the performance of a minority group can
decrease as we reduce coverage, and thus selective regression can
magnify disparities between different sensitive groups. We show that such
unwanted behavior can be avoided if we can construct features satisfying the
sufficiency criterion, so that the mean prediction and the associated
uncertainty are calibrated across all the groups. Further, to mitigate the
disparity in the performance across groups, we introduce two approaches based
on this calibration criterion: (a) by regularizing an upper bound of
conditional mutual information under a Gaussian assumption and (b) by
regularizing a contrastive loss for mean and uncertainty prediction. The
effectiveness of these approaches is demonstrated on synthetic as well as
real-world datasets.
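The coverage-disparity failure mode described in the abstract can be reproduced in a few lines. Below is a minimal, self-contained sketch (not the authors' implementation; the synthetic setup and all names are illustrative) in which a group-blind uncertainty estimate causes the minority group's error to grow as coverage shrinks:

```python
# Minimal sketch: selective regression with a group-blind uncertainty
# estimate can raise the minority group's error as coverage drops.
# Illustrative synthetic setup only; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

n = 20000
group = rng.integers(0, 2, size=n)              # 0 = majority, 1 = minority
x = rng.normal(size=n)

# Majority noise is homoscedastic; minority noise is largest near x = 0.
noise_scale = np.where(group == 0, 0.3, 1.5 * np.exp(-x**2) + 0.2)
y = 2.0 * x + rng.normal(size=n) * noise_scale

mean_pred = 2.0 * x                             # correct mean model
var_pred = 0.5 + 0.1 * x**2                     # group-blind "confidence"

def group_mse(coverage):
    """Keep the `coverage` fraction of samples with lowest predicted
    variance (the reject option abstains on the rest)."""
    keep = var_pred <= np.quantile(var_pred, coverage)
    resid2 = (y - mean_pred) ** 2
    return [resid2[keep & (group == g)].mean() for g in (0, 1)]

for cov in (1.0, 0.7, 0.4):
    mse0, mse1 = group_mse(cov)
    print(f"coverage={cov:.1f}  MSE group 0: {mse0:.2f}  group 1: {mse1:.2f}")
```

Because the variance estimate ignores the group, lowering coverage retains exactly the region where the minority group is noisiest, so its MSE rises while the majority's stays flat. Roughly speaking, under the sufficiency-style calibration the paper advocates, predicted uncertainty would be calibrated within every group, ruling out this selective retention of a group's hardest samples.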
Related papers
- Calibrated Probabilistic Forecasts for Arbitrary Sequences [58.54729945445505]
Real-world data streams can change unpredictably due to distribution shifts, feedback loops and adversarial actors.
We present a forecasting framework ensuring valid uncertainty estimates regardless of how data evolves.
arXiv Detail & Related papers (2024-09-27T21:46:42Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure based on testing a hypothesis about the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Conformal Prediction with Missing Values [19.18178194789968]
We first show that the marginal coverage guarantee of conformal prediction holds on imputed data for any missingness distribution.
We then show that a universally consistent quantile regression algorithm trained on the imputed data is Bayes optimal for the pinball risk.
arXiv Detail & Related papers (2023-06-05T09:28:03Z)
- Distribution-Free Finite-Sample Guarantees and Split Conformal Prediction [0.0]
Split conformal prediction represents a promising avenue for obtaining finite-sample guarantees under minimal distribution-free assumptions.
We highlight the connection between split conformal prediction and classical tolerance predictors developed in the 1940s (a sketch of the split conformal construction appears after this list).
arXiv Detail & Related papers (2022-10-26T14:12:24Z)
- Ensembling over Classifiers: a Bias-Variance Perspective [13.006468721874372]
We build upon the extension to the bias-variance decomposition by Pfau (2013) in order to gain crucial insights into the behavior of ensembles of classifiers.
We show that conditional estimates necessarily incur an irreducible error.
Empirically, standard ensembling reduces the bias, leading us to hypothesize that ensembles of classifiers may perform well in part because of this unexpected reduction.
arXiv Detail & Related papers (2022-06-21T17:46:35Z)
- Selective Ensembles for Consistent Predictions [19.154189897847804]
Models trained to the same objective can nonetheless disagree on individual predictions, and such inconsistency is undesirable in high-stakes contexts.
We show that this inconsistency extends beyond predictions to feature attributions.
We prove that selective ensembles achieve consistent predictions and feature attributions while maintaining low abstention rates.
arXiv Detail & Related papers (2021-11-16T05:03:56Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, a direct regression formulation is considered, and uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model uncertainty within present-day regression techniques remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Fairness Measures for Regression via Probabilistic Classification [0.0]
Algorithmic fairness involves expressing notions such as equity, or reasonable treatment, as quantifiable measures that a machine learning algorithm can optimise.
Most such measures target classification, in part because classification fairness measures are easily computed by comparing rates of outcomes, leading to behaviours such as ensuring that the fraction of eligible men selected matches the fraction of eligible women selected.
But such measures are computationally difficult to generalise to the continuous regression setting for problems such as pricing, or allocating payments.
For the regression setting we introduce tractable approximations of the independence, separation and sufficiency criteria by observing that they factorise as ratios of different conditional probabilities of the protected attributes.
arXiv Detail & Related papers (2020-01-16T21:53:26Z)
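As referenced in the split conformal prediction entry above, here is a minimal sketch of the standard split conformal construction for regression (a textbook recipe with illustrative names, not tied to any single paper in this list), showing the distribution-free finite-sample marginal coverage guarantee:

```python
# Minimal sketch of split conformal prediction for regression.
# Standard construction; illustrative synthetic data and names only.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data, split into a training half and a calibration half.
n = 2000
x = rng.uniform(-2, 2, size=n)
y = x**3 + rng.normal(scale=1.0, size=n)
x_tr, y_tr = x[: n // 2], y[: n // 2]
x_cal, y_cal = x[n // 2 :], y[n // 2 :]

# Fit any black-box regressor on the training half.
coef = np.polyfit(x_tr, y_tr, deg=3)

def predict(t):
    return np.polyval(coef, t)

# Conformity scores: absolute residuals on the held-out calibration half.
scores = np.abs(y_cal - predict(x_cal))

# Finite-sample quantile level: ceil((n_cal + 1)(1 - alpha)) / n_cal.
alpha = 0.1
n_cal = len(scores)
level = min(1.0, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new point: [f(x) - q, f(x) + q].
x_new = 1.3
print(f"interval: [{predict(x_new) - q:.2f}, {predict(x_new) + q:.2f}]")

# Empirical check of marginal coverage on fresh exchangeable data.
x_te = rng.uniform(-2, 2, size=5000)
y_te = x_te**3 + rng.normal(scale=1.0, size=5000)
covered = np.mean(np.abs(y_te - predict(x_te)) <= q)
print(f"empirical coverage: {covered:.3f}  (target {1 - alpha})")
```

The guarantee is marginal over exchangeable draws; conditional or group-wise coverage, the focus of several entries above, requires additional structure.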