Bounding and Approximating Intersectional Fairness through Marginal
Fairness
- URL: http://arxiv.org/abs/2206.05828v2
- Date: Fri, 23 Jun 2023 12:05:31 GMT
- Title: Bounding and Approximating Intersectional Fairness through Marginal
Fairness
- Authors: Mathieu Molina, Patrick Loiseau
- Abstract summary: Discrimination in machine learning often arises along multiple dimensions.
It is desirable to ensure intersectional fairness -- i.e., that no subgroup is discriminated against.
- Score: 7.954748673441148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discrimination in machine learning often arises along multiple dimensions
(a.k.a. protected attributes); it is then desirable to ensure
\emph{intersectional fairness} -- i.e., that no subgroup is discriminated
against. It is known that ensuring \emph{marginal fairness} for every dimension
independently is not sufficient in general. Due to the exponential number of
subgroups, however, directly measuring intersectional fairness from data is
impossible. In this paper, our primary goal is to understand in detail the
relationship between marginal and intersectional fairness through statistical
analysis. We first identify a set of sufficient conditions under which an exact
relationship can be obtained. Then, we prove high-probability bounds on
intersectional fairness in the general case (easily computable from marginal
fairness and other meaningful statistical quantities). Beyond their
descriptive value, we show that these theoretical bounds can be leveraged to
derive a heuristic that improves the approximation and bounds of intersectional
fairness by choosing, in a relevant manner, the protected attributes over which
intersectional subgroups are described. Finally, we test the performance of our
approximations and bounds on real and synthetic datasets.
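To make the contrast between the two notions concrete, here is a minimal sketch (not the paper's estimators or bounds) that computes marginal and intersectional demographic-parity disparities on toy data; the column names A1, A2, y_hat, the max/min disparity ratio, and the simulated classifier are all illustrative assumptions.

```python
import numpy as np
import pandas as pd

def selection_rates(df, group_cols, pred_col="y_hat"):
    """Positive-prediction rate for every observed combination of group_cols."""
    return df.groupby(group_cols)[pred_col].mean()

def disparity(rates):
    """Max/min ratio of selection rates; 1.0 means exact demographic parity."""
    rates = rates[rates > 0]
    return rates.max() / rates.min()

# Toy data: two binary protected attributes and a binary prediction.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({"A1": rng.integers(0, 2, n), "A2": rng.integers(0, 2, n)})

# A simulated classifier that under-selects only the (A1=1, A2=1) subgroup,
# so each marginal disparity stays small while the intersectional one is larger.
p = np.where((df["A1"] == 1) & (df["A2"] == 1), 0.3, 0.5)
df["y_hat"] = rng.binomial(1, p)

worst_marginal = max(disparity(selection_rates(df, ["A1"])),
                     disparity(selection_rates(df, ["A2"])))
intersectional = disparity(selection_rates(df, ["A1", "A2"]))

print(f"worst marginal disparity: {worst_marginal:.2f}")
print(f"intersectional disparity: {intersectional:.2f}")  # larger in this construction
```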
Related papers
- Intrinsic Fairness-Accuracy Tradeoffs under Equalized Odds [8.471466670802817]
We study the tradeoff between fairness and accuracy under the statistical notion of equalized odds.
We present a new upper bound on the accuracy as a function of the fairness budget.
Our results show that achieving high accuracy subject to low bias can be fundamentally limited by the statistical disparity across groups.
arXiv Detail & Related papers (2024-05-12T23:15:21Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Intersectional Fairness: A Fractal Approach [0.0]
We frame the problem of intersectional fairness within a geometrical setting.
We prove mathematically that, while fairness does not propagate "down" the levels, it does propagate "up" the levels.
We propose that fairness can be metaphorically thought of as a "fractal" problem.
arXiv Detail & Related papers (2023-02-24T15:15:32Z)
- The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice [5.175941513195566]
We show that it is possible to identify a large set of models that satisfy seemingly incompatible fairness constraints.
We offer tools and guidance for practitioners to understand when -- and to what degree -- fairness along multiple criteria can be achieved.
arXiv Detail & Related papers (2023-02-13T13:29:24Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Relational Proxies: Emergent Relationships as Fine-Grained Discriminators [52.17542855760418]
We propose a novel approach that leverages information between the global and local parts of an object to encode its label.
We design Relational Proxies based on our theoretical findings and evaluate them on seven challenging fine-grained benchmark datasets.
We also experimentally validate our theory and obtain consistent results across multiple benchmarks.
arXiv Detail & Related papers (2022-10-05T11:08:04Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z)
- Fast Fair Regression via Efficient Approximations of Mutual Information [0.0]
This paper introduces fast approximations of the independence, separation and sufficiency group fairness criteria for regression models.
It uses such approximations as regularisers to enforce fairness within a regularised risk minimisation framework (a minimal sketch of such an objective follows this list).
Experiments on real-world datasets indicate that, in spite of its superior computational efficiency, our algorithm still displays state-of-the-art accuracy/fairness tradeoffs.
arXiv Detail & Related papers (2020-02-14T08:50:51Z)
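As a companion to the Fast Fair Regression entry above, the following sketch shows what a fairness-regularised regression objective can look like. It is an assumption-laden illustration: the group-mean-gap penalty is a crude stand-in for that paper's mutual-information approximations, and the function name and weight lam are hypothetical.

```python
import numpy as np

def fairness_regularised_loss(y_true, y_pred, group, lam=1.0):
    """Regularised risk: MSE plus a simple independence-style penalty.

    The penalty (squared gap between group-wise mean predictions) is only an
    illustrative proxy, not the mutual-information approximations of the paper.
    `group` is assumed to be a binary array of protected-attribute values.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    gap = np.mean(y_pred[group == 1]) - np.mean(y_pred[group == 0])
    return mse + lam * gap ** 2

# Example: the penalty grows as mean predictions diverge between the two groups.
y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.2, 0.4, 0.1])
group = np.array([0, 0, 1, 1])
print(fairness_regularised_loss(y_true, y_pred, group, lam=0.5))
```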
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.