Group mixing drives inequality in face-to-face gatherings
- URL: http://arxiv.org/abs/2106.11688v2
- Date: Wed, 16 Mar 2022 09:47:36 GMT
- Title: Group mixing drives inequality in face-to-face gatherings
- Authors: Marcos Oliveira, Fariba Karimi, Maria Zens, Johann Schaible, Mathieu Génois, Markus Strohmaier
- Abstract summary: We show that the way social groups interact in face-to-face situations can enable the emergence of disparities in the visibility of social groups.
We present a mechanism that explains these disparities as the result of group mixing and group-size imbalance.
- Score: 2.7728956081909346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncovering how inequality emerges from human interaction is imperative for
just societies. Here we show that the way social groups interact in
face-to-face situations can enable the emergence of disparities in the
visibility of social groups. These disparities translate into members of
specific social groups having fewer social ties than the average (i.e., degree
inequality). We characterize group degree inequality in sensor-based data sets
and present a mechanism that explains these disparities as the result of group
mixing and group-size imbalance. We investigate how group sizes affect this
inequality, thereby uncovering the critical size and mixing conditions under
which a critical minority group emerges. If a minority group is larger than this
critical size, it can be a well-connected, cohesive group; if it is smaller,
minority cohesion widens degree inequality. Finally, we expose the
under-representation of individuals in degree rankings due to mixing dynamics
and propose a way to reduce such biases.
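To make the mechanism concrete, the sketch below simulates a two-group random network with a tunable homophily parameter h (an illustrative parametrization, not the paper's exact model) and reports the average degree per group: under strong in-group mixing, a small minority ends up with markedly fewer ties than the majority, while equal-sized groups stay balanced.

```python
import numpy as np

rng = np.random.default_rng(42)

def group_degrees(n, minority_frac, h):
    """Two-group random network with tunable mixing.

    h is the probability scale for same-group edges; cross-group
    edges form with scale (1 - h).  Illustrative assumption, not
    the paper's exact mechanism.
    """
    base = 0.05                                           # overall edge density
    labels = (rng.random(n) < minority_frac).astype(int)  # 1 = minority
    same = labels[:, None] == labels[None, :]
    p = np.where(same, h, 1 - h) * base
    upper = np.triu(rng.random((n, n)) < p, k=1)          # sample upper triangle
    adj = upper | upper.T                                 # symmetrize
    deg = adj.sum(axis=1)
    return deg[labels == 1].mean(), deg[labels == 0].mean()

for f in (0.1, 0.2, 0.35, 0.5):
    d_min, d_maj = group_degrees(2000, f, h=0.8)
    print(f"minority {f:.0%}: avg degree minority={d_min:.1f}, majority={d_maj:.1f}")
```

Running this shows degree inequality shrinking as the minority fraction approaches one half, echoing the size-imbalance effect described in the abstract.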
Related papers
- Stronger together? The homophily trap in networks [0.0]
Homophily -- the tendency to link with similar others -- can hinder diversity and widen inequalities.
We show that homophily traps arise when the minority comprises less than 25% of the network.
Our work reveals that social groups require a critical size to benefit from homophily without incurring structural costs.
arXiv Detail & Related papers (2024-12-28T14:14:16Z)
- Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans [0.30723404270319693]
We investigate a new form of bias in large language models (LLMs).
We find that ChatGPT portrayed African, Asian, and Hispanic Americans as more homogeneous than White Americans.
We argue that the tendency to describe groups as less diverse risks perpetuating stereotypes and discriminatory behavior.
arXiv Detail & Related papers (2024-01-16T16:52:00Z)
- Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z)
- Structural Group Unfairness: Measurement and Mitigation by means of the Effective Resistance [6.7454940931279666]
Social networks contribute to the distribution of social capital, defined as the relationships, norms of trust and reciprocity within a community.
There is a lack of methods to quantify social capital at a group level, which is particularly important when the groups are defined on the grounds of protected attributes.
We introduce three effective resistance-based measures of group social capital, namely group isolation, group diameter and group control, where the groups are defined according to the value of a protected attribute.
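Effective resistance between two nodes can be computed from the pseudoinverse of the graph Laplacian. The sketch below does exactly that, then forms simple group-level aggregates named to echo the paper's measures; the aggregation rules here are assumptions, not the paper's exact definitions.

```python
import numpy as np
import networkx as nx

def effective_resistance_matrix(G):
    """Pairwise effective resistances R_ij = Lp_ii + Lp_jj - 2*Lp_ij,
    where Lp is the pseudoinverse of the graph Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

G = nx.karate_club_graph()
R = effective_resistance_matrix(G)
group = [v for v, data in G.nodes(data=True) if data["club"] == "Officer"]
rest = [v for v in G if v not in group]

# Hypothetical group-level aggregates in the spirit of the paper's measures:
iso = R[np.ix_(group, rest)].mean()    # "isolation": resistance to the rest
diam = R[np.ix_(group, group)].max()   # "diameter": farthest pair within group
print(f"group isolation ~ {iso:.3f}, group diameter ~ {diam:.3f}")
```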
arXiv Detail & Related papers (2023-05-05T00:57:55Z)
- Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity alters and expands the set of mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z)
- Outlier-Robust Group Inference via Gradient Space Clustering [50.87474101594732]
Existing methods can improve the worst-group performance, but they require group annotations, which are often expensive and sometimes infeasible to obtain.
We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters.
We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods like DBSCAN.
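A toy sketch of the gradient-space idea, using a logistic-regression model so the per-sample gradient has the closed form (sigmoid(w·x) − y)·x; the data, model, and DBSCAN's eps are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy data: a majority and a minority blob; group labels are never used
# for training, mimicking the "no group annotations" setting.
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (50, 5))])
y = (X[:, 0] + rng.normal(0, 0.5, 250) > 1).astype(int)

clf = LogisticRegression().fit(X, y)
w = clf.coef_.ravel()

# Per-sample gradient of the logistic loss w.r.t. the weights:
# g_i = (sigmoid(w . x_i + b) - y_i) * x_i
p = 1 / (1 + np.exp(-(X @ w + clf.intercept_)))
grads = (p - y)[:, None] * X

# Cluster in gradient space; cluster ids act as inferred group annotations,
# and DBSCAN's noise label (-1) flags outliers.  eps here is a guess.
groups = DBSCAN(eps=1.0, min_samples=5).fit_predict(grads)
print("inferred group sizes:", np.unique(groups, return_counts=True))
```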
arXiv Detail & Related papers (2022-10-13T06:04:43Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that can map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
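As an illustration, the sketch below applies one standard quantifier, Adjusted Classify & Count, to estimate the prevalence of an unobserved sensitive attribute among positive and negative decisions. The tpr/fpr figures and decision data are hypothetical, and the paper evaluates a broader family of quantification methods.

```python
import numpy as np

def acc_prevalence(attr_preds, tpr, fpr):
    """Adjusted Classify & Count: correct the raw rate of predicted
    attribute=1 using the attribute classifier's tpr/fpr, which are
    estimated on a small labelled auxiliary set."""
    cc = attr_preds.mean()
    return float(np.clip((cc - fpr) / (tpr - fpr), 0, 1))

rng = np.random.default_rng(2)
# Hypothetical attribute predictions among approved vs. rejected individuals.
approved_preds = rng.random(500) < 0.30
rejected_preds = rng.random(500) < 0.55

# Hypothetical auxiliary-set estimates for the attribute classifier.
tpr, fpr = 0.85, 0.10
p_pos = acc_prevalence(approved_preds, tpr, fpr)
p_neg = acc_prevalence(rejected_preds, tpr, fpr)
print(f"estimated minority share: approved={p_pos:.2f}, rejected={p_neg:.2f}")
```

Comparing the two estimated shares gives a group-fairness signal without ever observing any individual's sensitive attribute.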
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be examined through the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
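One simple reading of the worst-case idea (the paper's exact formulation may differ): compute a metric on every intersectional subgroup and report the worst ratio, as sketched below with hypothetical approval data.

```python
import numpy as np
import pandas as pd

# Hypothetical decisions with two protected attributes; intersectional
# subgroups are their cross-product (e.g., gender x race).
df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"] * 25,
    "race":     ["A", "B", "A", "B", "B", "A", "A", "B"] * 25,
    "approved": np.random.default_rng(3).random(200) < 0.5,
})

# Positive rate per intersectional subgroup, then the worst-case ratio:
# min subgroup rate over max subgroup rate (1.0 = perfectly balanced).
rates = df.groupby(["gender", "race"])["approved"].mean()
worst_case = rates.min() / rates.max()
print(rates, f"\nworst-case ratio: {worst_case:.2f}", sep="\n")
```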
arXiv Detail & Related papers (2021-01-05T17:44:33Z)
- Contrastive Examples for Addressing the Tyranny of the Majority [83.93825214500131]
We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which the group memberships are intervened on.
We show that current generative adversarial networks are a powerful tool for learning these data points, called contrastive examples.
arXiv Detail & Related papers (2020-04-14T14:06:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.