Intersectional Fairness: A Fractal Approach
- URL: http://arxiv.org/abs/2302.12683v1
- Date: Fri, 24 Feb 2023 15:15:32 GMT
- Title: Intersectional Fairness: A Fractal Approach
- Authors: Giulio Filippi, Sara Zannone, Adriano Koshiyama
- Abstract summary: We frame the problem of intersectional fairness within a geometrical setting.
We prove mathematically that, while fairness does not propagate "down" the levels, it does propagate "up" the levels.
We propose that fairness can be metaphorically thought of as a "fractal" problem.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The issue of fairness in AI has received an increasing amount of attention in
recent years. The problem can be approached by looking at different protected
attributes (e.g., ethnicity, gender, etc.) independently, but fairness for
individual protected attributes does not imply intersectional fairness. In this
work, we frame the problem of intersectional fairness within a geometrical
setting. We project our data onto a hypercube, and split the analysis of
fairness by levels, where each level encodes the number of protected attributes
we are intersecting over. We prove mathematically that, while fairness does not
propagate "down" the levels, it does propagate "up" the levels. This means that
ensuring fairness for all subgroups at the lowest intersectional level (e.g.,
black women, white women, black men and white men) will necessarily result in
fairness for all the above levels, including each of the protected attributes
(e.g., ethnicity and gender) taken independently. We also derive a formula
describing the variance of the set of estimated success rates on each level,
under the assumption of perfect fairness. Using this theoretical finding as a
benchmark, we define a family of metrics which capture overall intersectional
bias. Finally, we propose that fairness can be metaphorically thought of as a
"fractal" problem. In fractals, patterns at the smallest scale repeat at a
larger scale. We see from this example that tackling the problem at the lowest
possible level, in a bottom-up manner, leads to the natural emergence of fair
AI. We suggest that trustworthiness is necessarily an emergent, fractal and
relational property of the AI system.
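
To make the "propagates up" claim concrete, below is a minimal Python sketch. It is not taken from the paper: the simulated data, the attribute names, and the binomial-sampling baseline used for the bias score are illustrative assumptions. The sketch computes subgroup success rates at the lowest intersection level and at the marginal (single-attribute) level, showing that when the lowest-level rates agree up to sampling noise, the marginal rates, being weighted averages of them, agree as well; it also shows one plausible way a variance-based bias score could be formed by comparing the observed spread of subgroup rates to a finite-sample baseline.

```python
import itertools
import numpy as np

# Hypothetical data: two binary protected attributes and a binary outcome.
# Under "perfect fairness" every lowest-level subgroup shares the same true
# success rate p; observed rates differ only through sampling noise.
rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)
ethnicity = rng.integers(0, 2, n)
p = 0.3
outcome = rng.binomial(1, p, n)

attributes = {"gender": gender, "ethnicity": ethnicity}

def success_rates(level):
    """Observed success rate for every subgroup at a given intersection level."""
    rates = {}
    for combo in itertools.combinations(attributes, level):
        values = np.stack([attributes[a] for a in combo], axis=1)
        for assignment in itertools.product([0, 1], repeat=level):
            mask = np.all(values == assignment, axis=1)
            if mask.any():
                rates[tuple(zip(combo, assignment))] = outcome[mask].mean()
    return rates

# Level 2 = lowest intersections (e.g. black women, white men, ...);
# level 1 = each protected attribute taken independently.
lowest = success_rates(2)
marginal = success_rates(1)

# "Up" propagation: each marginal rate is a weighted average of lowest-level
# rates, so if the lowest-level rates are (nearly) equal, the marginals are too.
print("level-2 spread:", np.ptp(list(lowest.values())))
print("level-1 spread:", np.ptp(list(marginal.values())))

# Illustrative bias score (an assumption, not the paper's exact formula):
# compare the empirical variance of subgroup rates with the spread expected
# from binomial sampling alone under a single shared success rate.
counts = {}
for key in lowest:
    mask = np.ones(n, dtype=bool)
    for attr, val in key:
        mask &= attributes[attr] == val
    counts[key] = mask.sum()
p_hat = outcome.mean()
expected_var = np.mean([p_hat * (1 - p_hat) / c for c in counts.values()])
observed_var = np.var(list(lowest.values()))
print("excess variance (observed - sampling baseline):", observed_var - expected_var)
```

The reverse direction gives the intuition for why fairness does not propagate down: two marginal rates can coincide even when some lowest-level subgroups are advantaged and others disadvantaged in ways that cancel once aggregated.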
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package FairDream to detect inequalities and then to correct for them.
Our experiments show that it is a property of FairDream to fulfill fairness objectives which are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z)
- Counterfactual Fairness Is Basically Demographic Parity [0.0]
Making fair decisions is crucial to ethically implementing machine learning algorithms in social settings.
We show that an algorithm which satisfies counterfactual fairness also satisfies demographic parity.
We formalize a concrete fairness goal: to preserve the order of individuals within protected groups.
arXiv Detail & Related papers (2022-08-07T23:38:59Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA)
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Bounding and Approximating Intersectional Fairness through Marginal Fairness [7.954748673441148]
Discrimination in machine learning often arises along multiple dimensions.
It is desirable to ensure intersectional fairness, i.e., that no subgroup is discriminated against.
arXiv Detail & Related papers (2022-06-12T19:53:34Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which considers the biases led by the above facts.
We generate counterfactuals corresponding to perturbations on each node's and their neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z)