Structural Group Unfairness: Measurement and Mitigation by means of the Effective Resistance
- URL: http://arxiv.org/abs/2305.03223v3
- Date: Fri, 22 Nov 2024 15:46:06 GMT
- Title: Structural Group Unfairness: Measurement and Mitigation by means of the Effective Resistance
- Authors: Adrian Arnaiz-Rodriguez, Georgina Curto, Nuria Oliver
- Abstract summary: Social networks contribute to the distribution of social capital, defined as the relationships, norms of trust and reciprocity within a community.
There is a lack of methods to quantify social capital at a group level, which is particularly important when the groups are defined on the grounds of protected attributes.
We introduce three effective resistance-based measures of group social capital, namely group isolation, group diameter and group control, where the groups are defined according to the value of a protected attribute.
- Score: 6.7454940931279666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social networks contribute to the distribution of social capital, defined as the relationships, norms of trust and reciprocity within a community or society that facilitate cooperation and collective action. Therefore, better positioned members in a social network benefit from faster access to diverse information and higher influence on information dissemination. A variety of methods have been proposed in the literature to measure social capital at an individual level. However, there is a lack of methods to quantify social capital at a group level, which is particularly important when the groups are defined on the grounds of protected attributes. To fill this gap, we propose to measure the social capital of a group of nodes by means of the effective resistance and emphasize the importance of considering the entire network topology. Grounded in spectral graph theory, we introduce three effective resistance-based measures of group social capital, namely group isolation, group diameter and group control, where the groups are defined according to the value of a protected attribute. We denote the social capital disparity among different groups in a network as structural group unfairness, and propose to mitigate it by means of a budgeted edge augmentation heuristic that systematically increases the social capital of the most disadvantaged group. In experiments on real-world networks, we uncover significant levels of structural group unfairness when using gender as the protected attribute, with females being the most disadvantaged group in comparison to males. We also illustrate how our proposed edge augmentation approach is able to not only effectively mitigate the structural group unfairness but also increase the social capital of all groups in the network.
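As a rough illustration of the machinery the abstract describes, the sketch below computes pairwise effective resistances from the Moore-Penrose pseudoinverse of the graph Laplacian (via the standard identity R(u,v) = L+_uu + L+_vv - 2 L+_uv) and runs a naive greedy budgeted edge augmentation. The function names, the mean-resistance aggregate used as a stand-in for group isolation, and the restriction of candidate edges to inter-group pairs are illustrative assumptions; the paper's exact definitions of group isolation, group diameter and group control, and its actual augmentation heuristic, are given in the full text.

```python
import numpy as np
import networkx as nx

def effective_resistance_matrix(G):
    """Pairwise effective resistances from the pseudoinverse of the
    graph Laplacian: R(u, v) = L+_uu + L+_vv - 2 * L+_uv."""
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    L_pinv = np.linalg.pinv(L)
    d = np.diag(L_pinv)
    R = d[:, None] + d[None, :] - 2.0 * L_pinv
    return R, {v: i for i, v in enumerate(nodes)}

def group_isolation(R, idx, group_a, group_b):
    """Illustrative aggregate (an assumption, not the paper's formula):
    mean effective resistance between the two groups."""
    ia = [idx[v] for v in group_a]
    ib = [idx[v] for v in group_b]
    return R[np.ix_(ia, ib)].mean()

def greedy_edge_augmentation(G, group_a, group_b, budget):
    """Hypothetical greedy heuristic: at each step, add the absent
    inter-group edge that most reduces the isolation aggregate."""
    G = G.copy()
    for _ in range(budget):
        R, idx = effective_resistance_matrix(G)
        base = group_isolation(R, idx, group_a, group_b)
        best_edge, best_gain = None, 0.0
        candidates = [(u, v) for u in group_a for v in group_b
                      if not G.has_edge(u, v)]
        for u, v in candidates:
            G.add_edge(u, v)  # trial insertion
            R2, idx2 = effective_resistance_matrix(G)
            gain = base - group_isolation(R2, idx2, group_a, group_b)
            G.remove_edge(u, v)
            if gain > best_gain:
                best_edge, best_gain = (u, v), gain
        if best_edge is None:
            break
        G.add_edge(*best_edge)
    return G

# Toy usage: two cliques joined by a single bridge, one group per clique.
G = nx.barbell_graph(5, 0)
group_a, group_b = list(range(5)), list(range(5, 10))
G_aug = greedy_edge_augmentation(G, group_a, group_b, budget=2)
```

Note that this greedy step recomputes the full pseudoinverse for every candidate edge, which is cubic in the number of nodes; it is meant for toy graphs, not as a faithful reimplementation of the paper's heuristic.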
Related papers
- Fairness Mediator: Neutralize Stereotype Associations to Mitigate Bias in Large Language Models [66.5536396328527]
LLMs inadvertently absorb spurious correlations from training data, leading to stereotype associations between biased concepts and specific social groups.
We propose Fairness Mediator (FairMed), a bias mitigation framework that neutralizes stereotype associations.
Our framework comprises two main components: a stereotype association prober and an adversarial debiasing neutralizer.
arXiv Detail & Related papers (2025-04-10T14:23:06Z) - Stronger together? The homophily trap in networks [0.0]
Homophily -- the tendency to link with similar others -- can hinder diversity and widen inequalities.
We show that homophily traps arise when the minority group falls below 25% of the network.
Our work reveals that social groups require a critical size to benefit from homophily without incurring structural costs.
arXiv Detail & Related papers (2024-12-28T14:14:16Z) - How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance [64.1656365676171]
Group imbalance has been a known problem in empirical risk minimization.
This paper quantifies the impact of individual groups on the sample complexity, the convergence rate, and the average and group-level testing performance.
arXiv Detail & Related papers (2024-03-12T04:38:05Z) - Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z) - Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z) - Fair Information Spread on Social Networks with Community Structure [2.9613974659787132]
Influence maximization (IM) algorithms aim to identify individuals who will generate the greatest spread through the social network if provided with information.
This work relies on fitting a model to the social network, which is then used to determine a seed allocation strategy for optimal fair information spread.
arXiv Detail & Related papers (2023-05-15T16:51:18Z) - Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity changes and opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z) - Hunting Group Clues with Transformers for Social Group Activity Recognition [3.1061678033205635]
Social group activity recognition requires recognizing multiple sub-group activities and identifying group members.
Most existing methods tackle both tasks by refining region features and then summarizing them into activity features.
We propose to leverage attention modules in transformers to generate effective social group features.
Our method is designed in such a way that the attention modules identify and then aggregate features relevant to social group activities.
arXiv Detail & Related papers (2022-07-12T01:46:46Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Group mixing drives inequality in face-to-face gatherings [2.7728956081909346]
We show that the way social groups interact in face-to-face situations can enable the emergence of disparities in their visibility.
We present a mechanism that explains these disparities as the result of group mixing and group-size imbalance.
arXiv Detail & Related papers (2021-06-22T11:44:02Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - GAEA: Graph Augmentation for Equitable Access via Reinforcement Learning [50.90625274621288]
Disparate access to resources by different subpopulations is a prevalent issue in societal and sociotechnical networks.
We introduce a new class of problems, Graph Augmentation for Equitable Access (GAEA), to enhance equity in networked systems by editing graph edges under budget constraints.
arXiv Detail & Related papers (2020-12-07T18:29:32Z) - Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.