Fairness constraint in Structural Econometrics and Application to fair
estimation using Instrumental Variables
- URL: http://arxiv.org/abs/2202.08977v1
- Date: Wed, 16 Feb 2022 15:34:07 GMT
- Title: Fairness constraint in Structural Econometrics and Application to fair
estimation using Instrumental Variables
- Authors: Samuele Centorrino and Jean-Pierre Florens and Jean-Michel Loubes
- Abstract summary: A supervised machine learning algorithm determines a model from a learning sample that will be used to predict new observations.
This information aggregation does not consider any potential selection on unobservables and any status-quo biases which may be contained in the training sample.
The latter bias has raised concerns around the so-called *fairness* of machine learning algorithms, especially towards disadvantaged groups.
- Score: 3.265773263570237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A supervised machine learning algorithm determines a model from a
learning sample that will be used to predict new observations. To this end, it
aggregates individual characteristics of the observations in the learning
sample. But this information aggregation does not consider any potential
selection on unobservables or any status-quo biases which may be contained in
the training sample. The latter bias has raised concerns about the so-called
*fairness* of machine learning algorithms, especially towards disadvantaged
groups. In this chapter, we review the issue of fairness in machine learning
through the lens of structural econometric models in which the unknown index
is the solution of a functional equation and issues of endogeneity are
explicitly accounted for. We model fairness as a linear operator whose null
space contains the set of strictly *fair* indexes. A *fair* solution is
obtained by projecting the unconstrained index onto the null space of this
operator or by directly finding the closest solution of the functional
equation within this null space. We also acknowledge that policymakers may
incur a cost when moving away from the status quo. *Approximate fairness* is
achieved by introducing a fairness penalty in the learning procedure and
balancing the influence of the status quo against that of a fully fair
solution.
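To make these two constructions concrete, the sketch below specializes them to the simplest linear case: the index is a coefficient vector beta in a linear model, and the fairness operator is a matrix F whose null space is the set of strictly fair indexes. This is a minimal illustration under those assumptions, not the authors' implementation; in particular it uses ordinary least squares with exogenous regressors, so the chapter's instrumental-variable treatment of endogeneity is not reproduced, and all names (F, lam, fair_projection, approximately_fair_ls) are illustrative.

```python
# Minimal sketch of (a) projecting an unconstrained index onto the null
# space of a fairness operator and (b) a penalized "approximate fairness"
# estimator, in a simplified linear/exogenous special case (assumed setup,
# not the paper's IV estimator).
import numpy as np

def fair_projection(beta_hat, F):
    """Project an unconstrained index onto the null space of F.

    Uses the orthogonal projector P = I - F^+ F, where F^+ is the
    Moore-Penrose pseudo-inverse, so that F @ (P @ beta_hat) = 0.
    """
    k = F.shape[1]
    P = np.eye(k) - np.linalg.pinv(F) @ F
    return P @ beta_hat

def approximately_fair_ls(X, y, F, lam):
    """Penalized least squares: min ||y - X b||^2 + lam * ||F b||^2.

    lam = 0 returns the status-quo (unconstrained) estimator; as lam grows
    the solution is pushed into the null space of F, i.e. toward a fully
    fair index.
    """
    A = X.T @ X + lam * F.T @ F
    return np.linalg.solve(A, X.T @ y)

# Toy illustration: a fairness operator that penalizes the coefficient on a
# (hypothetical) protected attribute, here the last column of X.
rng = np.random.default_rng(0)
n, k = 500, 4
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -0.5, 0.3, 0.8])   # 0.8 loads on protected attr.
y = X @ beta_true + rng.normal(size=n)

F = np.zeros((1, k))
F[0, -1] = 1.0                                 # "fair" means beta[-1] = 0

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(fair_projection(beta_hat, F))            # last coefficient exactly 0
print(approximately_fair_ls(X, y, F, lam=10))  # last coefficient shrunk
```

Setting lam = 0 recovers the status-quo estimator, while letting lam grow pushes the solution toward the null space of F, i.e. toward the fully fair projected index; the penalty weight thus plays the role of the cost the abstract attributes to moving away from the status quo.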
Related papers
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Generating collective counterfactual explanations in score-based classification via mathematical optimization [4.281723404774889]
A counterfactual explanation of an instance indicates how this instance should be minimally modified so that the perturbed instance is classified in the desired class.
Most of the Counterfactual Analysis literature focuses on the single-instance single-counterfactual setting.
By means of novel Mathematical Optimization models, we provide a counterfactual explanation for each instance in a group of interest.
arXiv Detail & Related papers (2023-10-19T15:18:42Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- LUCID-GAN: Conditional Generative Models to Locate Unfairness [1.5257247496416746]
We present LUCID-GAN, which generates canonical inputs via a conditional generative model instead of gradient-based inverse design.
We empirically evaluate LUCID-GAN on the UCI Adult and COMPAS data sets and show that it allows for detecting unethical biases in black-box models without requiring access to the training data.
arXiv Detail & Related papers (2023-07-28T10:37:49Z)
- CLIMAX: An exploration of Classifier-Based Contrastive Explanations [5.381004207943597]
We propose a novel post-hoc model XAI technique that provides contrastive explanations justifying the classification of a black box.
Our method, which we refer to as CLIMAX, is based on local classifiers.
We show that we achieve better consistency as compared to baselines such as LIME, BayLIME, and SLIME.
arXiv Detail & Related papers (2023-07-02T22:52:58Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective unsupervised debiasing technique.
We perform clustering in the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint [31.86959207229775]
In this paper, we propose a framework for learning an individually fair classifier.
We define the *probability of individual unfairness* (PIU) and solve an optimization problem where PIU's upper bound, which can be estimated from data, is controlled to be close to zero.
Experimental results show that our method can learn an individually fair classifier at a slight cost of accuracy.
arXiv Detail & Related papers (2020-02-17T02:46:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.