Causal Multi-Level Fairness
- URL: http://arxiv.org/abs/2010.07343v3
- Date: Wed, 12 May 2021 18:48:01 GMT
- Title: Causal Multi-Level Fairness
- Authors: Vishwali Mhasawade and Rumi Chunara
- Abstract summary: We formalize the problem of multi-level fairness using tools from causal inference.
We show the importance of the problem by illustrating the residual unfairness that remains if macro-level sensitive attributes are not accounted for.
- Score: 4.937180141196767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic systems are known to impact marginalized groups severely, and
more so if all sources of bias are not considered. While work in algorithmic
fairness to date has primarily focused on addressing discrimination due to
individually linked attributes, social science research elucidates how some
properties we link to individuals can be conceptualized as having causes at
macro (e.g. structural) levels, and it may be important to be fair to
attributes at multiple levels. For example, instead of simply considering race
as a causal, protected attribute of an individual, the cause may be distilled
as the perceived racial discrimination an individual experiences, which in turn
can be affected by neighborhood-level factors. This multi-level
conceptualization is relevant to questions of fairness, as it may be important
to take into account not only whether the individual belonged to another
demographic group, but also whether the individual received advantaged
treatment at the macro level. In this paper, we formalize the problem of
multi-level fairness using tools from causal inference in a manner that allows
one to assess and account for effects of sensitive attributes at multiple
levels. We show the importance of the problem by illustrating the residual
unfairness that remains if macro-level sensitive attributes are not accounted
for, or are included without accounting for their multi-level nature. Further,
in the context of a real-world task of predicting income based on macro- and
individual-level attributes, we demonstrate an approach for mitigating
unfairness that results from multi-level sensitive attributes.
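To make the multi-level setup concrete, the following is a minimal sketch in Python. It is not the paper's implementation: the linear structural causal model, the variable names (M for a macro-level attribute, A for an individual-level sensitive attribute, X for a covariate), and all coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative linear structural causal model (not the paper's model):
#   M -> A          macro-level attribute shapes the individual-level sensitive attribute
#   M -> X, A -> X  an observed covariate inherits bias from both levels
#   X -> Y          the outcome (e.g. income) depends on the covariate
M = rng.normal(size=n)                       # macro level, e.g. neighborhood disadvantage
A = 0.8 * M + rng.normal(size=n)             # individual level, e.g. perceived discrimination
X = 0.5 * A + 0.6 * M + rng.normal(size=n)   # covariate, e.g. access to opportunities
predict = lambda x: 1.2 * x                  # a predictor trained on X alone

# Abduction: recover the exogenous noise terms from the (known) mechanisms.
U_a = A - 0.8 * M
U_x = X - 0.5 * A - 0.6 * M

# Individual-level counterfactual only: do(A = a0), macro-level cause M untouched.
a0 = 0.0
X_cf_ind = 0.5 * a0 + 0.6 * M + U_x

# Multi-level counterfactual: do(M = m0), propagated through A's mechanism.
m0 = 0.0
A_cf = 0.8 * m0 + U_a
X_cf_multi = 0.5 * A_cf + 0.6 * m0 + U_x

# Residual unfairness: intervening on A alone leaves the direct M -> X path open,
# so predictions still correlate with the macro-level attribute.
print("corr with M, individual-level intervention:",
      round(np.corrcoef(predict(X_cf_ind), M)[0, 1], 3))
print("corr with M, multi-level intervention:   ",
      round(np.corrcoef(predict(X_cf_multi), M)[0, 1], 3))
```

Because the direct path from M to X stays open under the individual-level intervention, the first correlation is clearly nonzero while the second is near zero, mirroring the residual unfairness described in the abstract.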
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Fair Models in Credit: Intersectional Discrimination and the Amplification of Inequity [5.333582981327497]
The authors demonstrate the impact of algorithmic bias in the microfinance context.
We find that in addition to legally protected characteristics, sensitive attributes such as single parent status and number of children can result in imbalanced harm.
arXiv Detail & Related papers (2023-08-01T10:34:26Z)
- Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and is instead based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces (a generic kernel-dependence sketch follows this entry).
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
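As a rough illustration of what a kernel dependence measure between predictions and a sensitive attribute looks like, here is a biased HSIC estimate in Python; FairCOCCO's actual cross-covariance operator and normalization differ, so treat this only as a sketch of the general idea.

```python
import numpy as np

def rbf_gram(v, sigma=1.0):
    """RBF Gram matrix of a 1-D sample."""
    d = v[:, None] - v[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def hsic(pred, sens):
    """Biased HSIC estimate: kernel dependence between predictions and a
    sensitive attribute. Near zero only if the two are independent."""
    n = len(pred)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(rbf_gram(pred) @ H @ rbf_gram(sens) @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
sens = rng.normal(size=300)
fair = rng.normal(size=300)                  # independent of sens
unfair = sens + 0.3 * rng.normal(size=300)   # strongly dependent on sens
print(hsic(fair, sens), hsic(unfair, sens))  # small vs. clearly larger
```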
- Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness [0.0]
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these techniques largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z)
- Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods only optimise for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion called SCAFF -- Splitting Criterion AUC for Fairness (a sketch of the threshold-independent idea follows this entry).
arXiv Detail & Related papers (2021-10-18T13:40:25Z)
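One plausible reading of a threshold-independent demographic parity measure is the AUC separating the score distributions of two sensitive groups: 0.5 means no classification threshold can distinguish the groups. The sketch below follows that reading and is not SCAFF's exact splitting criterion.

```python
import numpy as np

def parity_auc(scores, group):
    """Probability that a random group-1 score ranks above a random group-0
    score (ties count half). 0.5 means demographic parity holds at every
    classification threshold; values far from 0.5 indicate disparity."""
    s1, s0 = scores[group == 1], scores[group == 0]
    wins = (s1[:, None] > s0[None, :]).mean()
    ties = (s1[:, None] == s0[None, :]).mean()
    return wins + 0.5 * ties

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=500)
fair_scores = rng.random(500)                # independent of group membership
unfair_scores = fair_scores + 0.3 * group    # shifted upward for group 1
print(parity_auc(fair_scores, group))        # ~0.5
print(parity_auc(unfair_scores, group))      # well above 0.5
```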
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches -- methods for estimating class prevalences rather than classifying individuals -- are particularly suited to tackle the fairness-under-unawareness problem (a classify-and-count sketch follows this entry).
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
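In the quantification literature, the goal is to estimate a prevalence (e.g. the positive rate within an unobserved sensitive group) rather than to classify individuals. A standard building block is adjusted classify-and-count, sketched below with hypothetical numbers; how the paper combines it with fairness measurement is not reproduced here.

```python
import numpy as np

def adjusted_classify_and_count(hard_preds, tpr, fpr):
    """Estimate a class prevalence from hard classifier outputs.

    Corrects the raw predicted rate using the classifier's true/false
    positive rates (estimated on held-out labeled data). A standard
    quantification method, shown here as a generic sketch.
    """
    raw = hard_preds.mean()
    return np.clip((raw - fpr) / (tpr - fpr), 0.0, 1.0)

# Hypothetical numbers: a classifier with tpr=0.8, fpr=0.1 predicts 31%
# positives; the corrected prevalence estimate is (0.31 - 0.1) / 0.7 = 0.3.
preds = np.array([1] * 31 + [0] * 69)
print(adjusted_classify_and_count(preds, tpr=0.8, fpr=0.1))
```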
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features [72.72840552588134]
We identify the proximity of the latent representations of different classes in fine-grained recognition networks as a key factor in the success of adversarial attacks.
We introduce an attention-based regularization mechanism that maximally separates the discriminative latent features of different classes (a generic separation sketch follows this entry).
arXiv Detail & Related papers (2020-06-10T18:34:45Z)
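The separation idea can be illustrated with a generic penalty on pairwise cosine similarity between class prototypes; the paper's attention-based mechanism is more involved, so treat this only as an illustrative stand-in.

```python
import numpy as np

def separation_penalty(prototypes):
    """Mean positive pairwise cosine similarity between class prototypes.

    Adding this penalty to a training loss pushes the discriminative
    features of different classes apart. A generic stand-in for the
    paper's attention-based regularization, not its actual mechanism.
    """
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = p @ p.T
    k = len(p)
    off_diag = sim[~np.eye(k, dtype=bool)]   # drop self-similarity
    return np.maximum(off_diag, 0.0).mean()

rng = np.random.default_rng(0)
close = rng.normal(size=(5, 16)) + 5.0       # prototypes clustered together
spread = rng.normal(size=(5, 16)) * 5.0      # prototypes pointing apart
print(separation_penalty(close), separation_penalty(spread))  # high vs. low
```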
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.