Authenticity and exclusion: social media recommendation algorithms and the dynamics of belonging in professional networks
- URL: http://arxiv.org/abs/2407.08552v1
- Date: Thu, 11 Jul 2024 14:36:58 GMT
- Title: Authenticity and exclusion: social media recommendation algorithms and the dynamics of belonging in professional networks
- Authors: Nil-Jana Akpinar, Sina Fazelpour
- Abstract summary: Homophily profoundly influences social interactions, affecting associations, information disclosure, and the dynamics of social exchanges.
How might the nature and design of social media platforms, where different conversational contexts frequently collapse, impact these dynamics?
Our findings indicate a decline in the visibility of professional content generated by minority groups, a trend that is exacerbated over time by recommendation algorithms.
- Score: 0.8287206589886879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Homophily - the attraction of similarity - profoundly influences social interactions, affecting associations, information disclosure, and the dynamics of social exchanges. Organizational studies reveal that when professional and personal boundaries overlap, individuals from minority backgrounds often encounter a dilemma between authenticity and inclusion due to these homophily-driven dynamics: if they disclose their genuine interests, they risk exclusion from the broader conversation. Conversely, to gain inclusion, they might feel pressured to assimilate. How might the nature and design of social media platforms, where different conversational contexts frequently collapse, and the recommender algorithms that are at the heart of these platforms, which can prioritize content based on network structure and historical user engagement, impact these dynamics? In this paper, we employ agent-based simulations to investigate this question. Our findings indicate a decline in the visibility of professional content generated by minority groups, a trend that is exacerbated over time by recommendation algorithms. Within these minority communities, users who closely resemble the majority group tend to receive greater visibility. We examine the philosophical and design implications of our results, discussing their relevance to questions of informational justice, inclusion, and the epistemic benefits of diversity.
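To make the dynamic concrete, below is a minimal agent-based sketch in the spirit of the abstract. It is not the authors' actual model: the agent counts, the homophily parameters (P_IN_GROUP, P_OUT_GROUP), and the popularity-ranked feed are illustrative assumptions. It shows how homophily-driven engagement, fed back through an engagement-based recommender, can erode the feed share of minority-authored content over time.

```python
import random

# Hypothetical sketch, not the paper's simulation: agents engage more
# readily with in-group content, and a popularity-based recommender
# feeds historical engagement back into visibility.
random.seed(0)

N_AGENTS = 200
MINORITY_SHARE = 0.2   # assumed fraction of minority-group agents
P_IN_GROUP = 0.8       # assumed engagement probability, same-group content
P_OUT_GROUP = 0.2      # assumed engagement probability, cross-group content
FEED_SIZE = 20
ROUNDS = 50

groups = ["min" if random.random() < MINORITY_SHARE else "maj"
          for _ in range(N_AGENTS)]
engagement = [0.0] * N_AGENTS  # accumulated engagement per author

def feed_for(viewer):
    """Popularity-biased recommender: rank other authors by past engagement."""
    others = [a for a in range(N_AGENTS) if a != viewer]
    others.sort(key=lambda a: engagement[a], reverse=True)
    return others[:FEED_SIZE]

visibility = []  # minority share of all feed slots, per round
for _ in range(ROUNDS):
    shown_min = shown_all = 0
    for viewer in range(N_AGENTS):
        for author in feed_for(viewer):
            shown_all += 1
            shown_min += groups[author] == "min"
            p = P_IN_GROUP if groups[viewer] == groups[author] else P_OUT_GROUP
            if random.random() < p:
                engagement[author] += 1  # engagement drives future ranking
    visibility.append(shown_min / shown_all)

print(f"minority feed share, round 1:  {visibility[0]:.3f}")
print(f"minority feed share, round {ROUNDS}: {visibility[-1]:.3f}")
```

In this toy run the minority share of feed slots typically drifts below its starting level, since majority-authored posts accumulate engagement faster and the recommender compounds that head start.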
Related papers
- An Empirical Study of Group Conformity in Multi-Agent Systems [0.26999000177990923]
This study explores how Large Language Model (LLM) agents shape public opinion through debates on five contentious topics. By simulating over 2,500 debates, we analyze how initially neutral agents, assigned a centrist disposition, adopt specific stances over time.
arXiv Detail & Related papers (2025-06-02T05:22:29Z)
- Review of Demographic Fairness in Face Recognition [2.7624021966289605]
This review consolidates research efforts, providing a comprehensive overview of the multifaceted aspects of demographic fairness in face recognition (FR).
We examine the primary causes, datasets, assessment metrics, and mitigation approaches associated with demographic disparities in FR.
We highlight current advancements and identify emerging challenges that need further investigation.
arXiv Detail & Related papers (2025-02-04T13:28:49Z)
- Emergence of human-like polarization among large language model agents [61.622596148368906]
We simulate a networked system involving thousands of large language model agents and find that their social interactions result in human-like polarization.
The similarity between humans and LLM agents raises concerns about their capacity to amplify societal polarization, but it also positions them as a valuable testbed for identifying plausible strategies to mitigate it.
arXiv Detail & Related papers (2025-01-09T11:45:05Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- From Melting Pots to Misrepresentations: Exploring Harms in Generative AI [3.167924351428519]
Concerns persist regarding discriminatory tendencies within advanced generative models such as Gemini and GPT.
Despite widespread calls for diversification of media representations, marginalized racial and ethnic groups continue to face persistent distortion, stereotyping, and neglect within the AI context.
arXiv Detail & Related papers (2024-03-16T02:29:42Z)
- Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research [75.84463664853125]
We provide a first attempt to quantify concerns regarding three topics, namely, environmental impact, equity, and impact on peer reviewing.
We capture existing (dis)parities between and within different groups with respect to seniority, academia, and industry.
We devise recommendations to mitigate the disparities we find, some of which have already been successfully implemented.
arXiv Detail & Related papers (2023-06-29T12:44:53Z)
- Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z)
- Estimating Topic Exposure for Under-Represented Users on Social Media [25.963970325207892]
This work focuses on highlighting the contributions of the engagers in the observed data.
The first step in behavioral analysis of these users is to find the topics they are exposed to but did not engage with.
We propose a novel framework that aids in identifying these users and estimates their topic exposure.
arXiv Detail & Related papers (2022-08-07T19:37:41Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation; a toy sketch of position-weighted group exposure follows this entry.
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
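As a toy illustration of the group-level exposure the entry above formalizes (a hedged sketch, not the paper's exact metrics): exposure at rank r is discounted by the standard position-bias weight 1/log2(r + 1) and then aggregated by producer group. The feeds, items, and group labels below are made up.

```python
import math
from collections import defaultdict

# Hypothetical data: user -> ranked feed of (item, producer_group).
feeds = {
    "u1": [("a", "maj"), ("b", "maj"), ("c", "min")],
    "u2": [("b", "maj"), ("c", "min"), ("d", "min")],
    "u3": [("a", "maj"), ("d", "min"), ("b", "maj")],
}

# Allocate exposure with a position-bias discount; sum per producer group.
group_exposure = defaultdict(float)
for user, ranking in feeds.items():
    for rank, (item, group) in enumerate(ranking, start=1):
        group_exposure[group] += 1.0 / math.log2(rank + 1)

total = sum(group_exposure.values())
for group, exp in sorted(group_exposure.items()):
    print(f"{group}: {exp / total:.2f} of total exposure")
```

The paper's metrics are jointly multisided; the same aggregation could be keyed on (consumer group, producer group) pairs to surface disparities on both sides at once.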
- Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness [0.0]
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these techniques largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z)
- Two-Face: Adversarial Audit of Commercial Face Recognition Systems [6.684965883341269]
Computer vision applications tend to be biased against minority groups, which results in unfair and concerning societal and political outcomes.
We perform an extensive adversarial audit on multiple systems and datasets, making a number of concerning observations.
We conclude with a discussion on the broader societal impacts in light of these observations and a few suggestions on how to collectively deal with this issue.
arXiv Detail & Related papers (2021-11-17T14:21:23Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem; a toy sketch of this idea follows this entry.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
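As a rough sketch of the quantification idea in the entry above (hypothetical, not the paper's method): rather than hard-classifying each person's unobserved sensitive attribute and counting, one can weight observed outcomes by an auxiliary attribute classifier's posterior probabilities, in the style of "probabilistic classify and count." All numbers below are made up.

```python
# p_hat[i]: auxiliary classifier's estimate of P(person i is in the minority
# group); pos[i]: 1 if the decision model gave person i a positive outcome.
p_hat = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.6, 0.4]
pos   = [1,   0,   1,   1,   0,   1,   1,   0]

# Expected-count estimates of each group's positive-outcome rate.
min_mass = sum(p_hat)
maj_mass = len(p_hat) - min_mass
min_rate = sum(p * y for p, y in zip(p_hat, pos)) / min_mass
maj_rate = sum((1 - p) * y for p, y in zip(p_hat, pos)) / maj_mass

print(f"estimated demographic parity gap: {abs(min_rate - maj_rate):.3f}")
```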
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.