Authenticity and exclusion: social media algorithms and the dynamics of belonging in epistemic communities
- URL: http://arxiv.org/abs/2407.08552v2
- Date: Mon, 21 Oct 2024 16:47:57 GMT
- Title: Authenticity and exclusion: social media algorithms and the dynamics of belonging in epistemic communities
- Authors: Nil-Jana Akpinar, Sina Fazelpour
- Abstract summary: This paper examines how social media platforms and their recommendation algorithms shape the professional visibility and opportunities of researchers from minority groups.
Using agent-based simulations, we uncover three key patterns: First, these algorithms disproportionately harm the professional visibility of researchers from minority groups.
Second, within these minority groups, the algorithms result in greater visibility for users who more closely resemble the majority group, incentivizing assimilation at the cost of professional invisibility.
Third, even for topics that strongly align with minority identities, content created by minority researchers is less visible to the majority than similar content produced by majority users.
- Score: 0.8287206589886879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent philosophical work has explored how the social identity of knowers influences how their contributions are received, assessed, and credited. However, a critical gap remains regarding the role of technology in mediating and enabling communication within today's epistemic communities. This paper addresses this gap by examining how social media platforms and their recommendation algorithms shape the professional visibility and opportunities of researchers from minority groups. Using agent-based simulations, we investigate this question with respect to components of a widely used recommendation algorithm, and uncover three key patterns: First, these algorithms disproportionately harm the professional visibility of researchers from minority groups, creating systemic patterns of exclusion. Second, within these minority groups, the algorithms result in greater visibility for users who more closely resemble the majority group, incentivizing assimilation at the cost of professional invisibility. Third, even for topics that strongly align with minority identities, content created by minority researchers is less visible to the majority than similar content produced by majority users. Importantly, these patterns emerge even though individual engagement with professional content is independent of group identity. These findings have significant implications for philosophical discussions on epistemic injustice and exclusion, and for policy proposals aimed at addressing these harms. More broadly, they call for a closer examination of the pervasive, but often neglected role of AI and data-driven technologies in shaping today's epistemic communities.
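To make the mechanism concrete, here is a minimal agent-based sketch in the spirit of the setup the abstract describes. It is an illustrative toy, not the authors' simulation: the group sizes, homophily level, feed size, and the popularity-ranked recommender below are all assumptions chosen for the example. It reproduces the qualitative point that a group-blind engagement rule, combined with a homophilous follow network and engagement-ranked recommendations, can still produce group-level visibility gaps.

```python
# Toy agent-based simulation: identity-blind engagement + homophilous
# follow network + popularity-ranked recommendations -> group-level
# visibility gaps. All parameters are illustrative assumptions.
import random
from collections import defaultdict

random.seed(0)
N = 500                 # number of researcher agents
MINORITY_FRAC = 0.2     # fraction of agents in the minority group
HOMOPHILY = 0.8         # probability of accepting a same-group follow
N_FOLLOWS = 20          # follows per agent
ENGAGE_P = 0.1          # per-post engagement probability (group-blind)
FEED_K = 10             # algorithmic recommendations per feed
ROUNDS = 50

group = [1 if random.random() < MINORITY_FRAC else 0 for _ in range(N)]

# Build a homophilous follow network by rejection sampling.
follows = defaultdict(set)
for u in range(N):
    while len(follows[u]) < N_FOLLOWS:
        v = random.randrange(N)
        if v == u or v in follows[u]:
            continue
        accept_p = HOMOPHILY if group[u] == group[v] else 1 - HOMOPHILY
        if random.random() < accept_p:
            follows[u].add(v)

past_engagement = [0] * N
for _ in range(ROUNDS):
    # Out-of-network recommendations: authors with the most past engagement.
    recs = sorted(range(N), key=lambda a: -past_engagement[a])[:FEED_K]
    new_engagement = [0] * N
    for u in range(N):
        feed = set(follows[u]) | set(recs)
        feed.discard(u)
        for author in feed:
            # Engagement is independent of group identity.
            if random.random() < ENGAGE_P:
                new_engagement[author] += 1
    for a in range(N):
        past_engagement[a] += new_engagement[a]

def group_mean(g):
    members = [a for a in range(N) if group[a] == g]
    return sum(past_engagement[a] for a in members) / len(members)

print(f"majority mean engagement: {group_mean(0):.1f}")
print(f"minority mean engagement: {group_mean(1):.1f}")
```

Even though `ENGAGE_P` is identity-blind, minority authors end up with less engagement on average: their cross-group follows leak attention to the larger group, and the popularity-ranked recommendation slot compounds whatever reach advantage already exists.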
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- From Melting Pots to Misrepresentations: Exploring Harms in Generative AI [3.167924351428519]
Concerns persist regarding discriminatory tendencies within advanced generative models such as Gemini and GPT.
Despite widespread calls for diversification of media representations, marginalized racial and ethnic groups continue to face persistent distortion, stereotyping, and neglect within the AI context.
arXiv Detail & Related papers (2024-03-16T02:29:42Z)
- Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research [75.84463664853125]
We provide a first attempt to quantify concerns regarding three topics, namely, environmental impact, equity, and impact on peer reviewing.
We capture existing (dis)parities between and within different groups with respect to seniority, academia, and industry.
We devise recommendations to mitigate the disparities we found, some of which have already been successfully implemented.
arXiv Detail & Related papers (2023-06-29T12:44:53Z)
- Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z)
- Estimating Topic Exposure for Under-Represented Users on Social Media [25.963970325207892]
This work focuses on highlighting the contributions of the engagers in the observed data.
The first step in a behavioral analysis of these users is to find the topics they are exposed to but do not engage with.
We propose a novel framework that aids in identifying these users and estimates their topic exposure.
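The exposed-but-not-engaged step lends itself to a tiny illustration. The data layout below, per-user (post_id, topic) impressions plus a set of engaged post ids, is a made-up stand-in, not the paper's actual framework:

```python
from typing import List, Set, Tuple

def exposed_not_engaged(
    exposures: List[Tuple[str, str]],  # (post_id, topic) pairs shown to the user
    engaged_posts: Set[str],           # post_ids the user interacted with
) -> Set[str]:
    """Return topics the user was exposed to but never engaged with."""
    exposed_topics = {topic for _, topic in exposures}
    engaged_topics = {topic for pid, topic in exposures if pid in engaged_posts}
    return exposed_topics - engaged_topics

feed = [("p1", "fairness"), ("p2", "nlp"), ("p3", "fairness"), ("p4", "vision")]
print(exposed_not_engaged(feed, engaged_posts={"p1"}))  # -> {'nlp', 'vision'}
```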
arXiv Detail & Related papers (2022-08-07T19:37:41Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
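As a flavor of what such metrics compute, here is a toy expected-exposure calculation on the producer side under a standard position-bias model. The log discount and the simple group gap at the end are common illustrative choices, not necessarily the paper's exact formalization:

```python
# Expected exposure per producer group under a position-biased browsing
# model; the ranking and group labels are hypothetical.
import math

def position_exposure(rank: int) -> float:
    # DCG-style position bias: lower-ranked items get less attention.
    return 1.0 / math.log2(rank + 1)

ranking = ["A", "A", "B", "A", "B", "B", "A", "B"]  # producer group per rank 1..8

exposure = {"A": 0.0, "B": 0.0}
count = {"A": 0, "B": 0}
for rank, grp in enumerate(ranking, start=1):
    exposure[grp] += position_exposure(rank)
    count[grp] += 1

# Average exposure per item, by producer group.
avg = {g: exposure[g] / count[g] for g in exposure}
print(avg)                       # group A sits higher, so avg['A'] > avg['B']
print(abs(avg["A"] - avg["B"]))  # one simple producer-side disparity measure
```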
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness [0.0]
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how such proposals largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z)
- Two-Face: Adversarial Audit of Commercial Face Recognition Systems [6.684965883341269]
Computer vision applications tend to be biased against minority groups, which results in unfair and concerning societal and political outcomes.
We perform an extensive adversarial audit on multiple systems and datasets, making a number of concerning observations.
We conclude with a discussion on the broader societal impacts in light of these observations and a few suggestions on how to collectively deal with this issue.
arXiv Detail & Related papers (2021-11-17T14:21:23Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
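The quantification idea can be sketched in a few lines: rather than predicting each individual's sensitive attribute, estimate the group's prevalence in a subpopulation and correct for the proxy classifier's error rates. Adjusted Classify & Count, used below, is a classic quantification baseline; the error rates and observed rates are hypothetical numbers for illustration:

```python
# Sketch of fairness measurement under unawareness via quantification.
# The proxy classifier's TPR/FPR are assumed to come from a small
# auxiliary labeled sample.
def acc_prevalence(observed_pos_rate: float, tpr: float, fpr: float) -> float:
    """Adjusted Classify & Count: correct the raw proxy-positive rate
    using the proxy classifier's known true/false positive rates."""
    return max(0.0, min(1.0, (observed_pos_rate - fpr) / (tpr - fpr)))

TPR, FPR = 0.85, 0.10  # proxy attribute classifier quality (assumed known)

# Proxy-positive rate among people who received the favorable outcome,
# and among everyone (hypothetical).
prev_favored = acc_prevalence(observed_pos_rate=0.22, tpr=TPR, fpr=FPR)
prev_overall = acc_prevalence(observed_pos_rate=0.31, tpr=TPR, fpr=FPR)

# If the group is under-represented among favorable outcomes relative to
# its overall prevalence, that signals a demographic-parity-style gap.
print(f"group share among favored: {prev_favored:.2f}")
print(f"group share overall:      {prev_overall:.2f}")
```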
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)