Group fairness without demographics using social networks
- URL: http://arxiv.org/abs/2305.11361v1
- Date: Fri, 19 May 2023 00:45:55 GMT
- Title: Group fairness without demographics using social networks
- Authors: David Liu, Virginie Do, Nicolas Usunier, Maximilian Nickel
- Abstract summary: Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
- Score: 29.073125057536014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group fairness is a popular approach to prevent unfavorable treatment of
individuals based on sensitive attributes such as race, gender, and disability.
However, the reliance of group fairness on access to discrete group information
raises several limitations and concerns, especially with regard to privacy,
intersectionality, and unforeseen biases. In this work, we propose a
"group-free" measure of fairness that does not rely on sensitive attributes
and, instead, is based on homophily in social networks, i.e., the common
property that individuals sharing similar attributes are more likely to be
connected. Our measure is group-free as it avoids recovering any form of group
memberships and uses only pairwise similarities between individuals to define
inequality in outcomes relative to the homophily structure in the network. We
theoretically justify our measure by showing it is commensurate with the notion
of additive decomposability in the economic inequality literature and also
bound the impact of non-sensitive confounding attributes. Furthermore, we apply
our measure to develop fair algorithms for classification, maximizing
information access, and recommender systems. Our experimental results show that
the proposed approach can reduce inequality among protected classes without
knowledge of sensitive attribute labels. We conclude with a discussion of the
limitations of our approach when applied in real-world settings.
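The abstract describes the measure only conceptually: inequality in outcomes is defined from pairwise quantities alone, weighted by the network's homophily structure, with no group labels anywhere. The sketch below is a hypothetical illustration of that kind of quantity, not the paper's formula; the function name group_free_disparity, the specific ratio, and the toy data are assumptions made for this example.

```python
# A hypothetical, group-free proxy: the share of total pairwise outcome
# inequality that falls between individuals who are NOT connected in the
# network. Under homophily (connected individuals tend to share attributes),
# a high value means outcome gaps align with latent communities, hinting at
# group-level disparity without ever recovering group memberships.
import numpy as np

def group_free_disparity(adjacency, outcomes):
    n = len(outcomes)
    gaps = np.abs(outcomes[:, None] - outcomes[None, :])  # |y_i - y_j| for all pairs
    all_pairs = 1.0 - np.eye(n)                           # every distinct pair
    non_ties = all_pairs * (adjacency == 0)               # dissimilar (unconnected) pairs
    return (non_ties * gaps).sum() / (all_pairs * gaps).sum()

# Toy network: two tight communities {0,1} and {2,3}; outcomes differ mostly
# across communities, so the score is close to 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([0.9, 0.8, 0.2, 0.1])   # e.g., classifier scores per individual
print(group_free_disparity(A, y))    # ~0.93 for this toy example
```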
Related papers
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness [0.0]
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these techniques largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Causal Multi-Level Fairness [4.937180141196767]
We formalize the problem of multi-level fairness using tools from causal inference.
We show the importance of the problem by illustrating the residual unfairness that remains when macro-level sensitive attributes are not accounted for.
arXiv Detail & Related papers (2020-10-14T18:26:17Z)
- Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with a $p$-norm objective and individual fairness constraints, with a provable approximation guarantee.
arXiv Detail & Related papers (2020-06-22T20:02:09Z)
- Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features [72.72840552588134]
We identify the proximity of the latent representations of different classes in fine-grained recognition networks as a key factor to the success of adversarial attacks.
We introduce an attention-based regularization mechanism that maximally separates the discriminative latent features of different classes.
arXiv Detail & Related papers (2020-06-10T18:34:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.