Along the Margins: Marginalized Communities' Ethical Concerns about
Social Platforms
- URL: http://arxiv.org/abs/2304.08882v2
- Date: Tue, 6 Feb 2024 12:17:47 GMT
- Title: Along the Margins: Marginalized Communities' Ethical Concerns about
Social Platforms
- Authors: Lauren Olson, Emitzá Guzmán, and Florian Kunneman
- Abstract summary: We identified marginalized communities' ethical concerns about social platforms.
Recent platform malfeasance indicates that software teams prioritize shareholder concerns over user concerns.
We found that marginalized communities' ethical concerns predominantly revolve around discrimination and misrepresentation.
- Score: 3.357853336791203
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we identified marginalized communities' ethical concerns about
social platforms. We performed this identification because recent platform
malfeasance indicates that software teams prioritize shareholder concerns over
user concerns. Additionally, these platform shortcomings often have devastating
effects on marginalized populations. We first scraped 586 marginalized
communities' subreddits, aggregated a dataset of their social platform mentions
and manually annotated mentions of ethical concerns in these data. We
subsequently analyzed trends in the manually annotated data and tested the
extent to which ethical concerns can be automatically classified by means of
natural language processing (NLP). We found that marginalized communities'
ethical concerns predominantly revolve around discrimination and
misrepresentation, and reveal deficiencies in current software development
practices. As such, researchers and developers could use our work to further
investigate these concerns and rectify current software flaws.
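The abstract describes a pipeline of scraping subreddit posts, annotating ethical-concern mentions, and then testing automatic classification with NLP. As a minimal illustration only (the lexicon, categories, and keyword-matching approach below are assumptions for this sketch, not the authors' actual annotation scheme or classifiers), a crude first pass at flagging candidate concern mentions might look like:

```python
# Hypothetical sketch (NOT the paper's pipeline): flag candidate
# ethical-concern mentions in scraped posts using a tiny keyword lexicon,
# as a stand-in for manual annotation followed by an NLP classifier.
import re

# Toy lexicon mapping concern categories to trigger words; both the
# categories and the terms are illustrative assumptions.
LEXICON = {
    "discrimination": {"banned", "censored", "discriminate", "shadowban"},
    "misrepresentation": {"stereotype", "misgender", "mislabel"},
}

def flag_concerns(post: str) -> set[str]:
    """Return the concern categories whose keywords appear in the post."""
    tokens = set(re.findall(r"[a-z]+", post.lower()))
    return {cat for cat, words in LEXICON.items() if tokens & words}

posts = [
    "The app keeps trying to shadowban accounts like mine.",
    "Their ad system relies on a harmful stereotype.",
    "I just posted a photo of my lunch.",
]
for p in posts:
    print(p, "->", flag_concerns(p) or "no concern flagged")
```

A real study would replace the lexicon with trained classifiers (the paper reports testing NLP methods for this), but the sketch shows the basic mention-flagging structure.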
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Crossing Margins: Intersectional Users' Ethical Concerns about Software [3.0830895408549583]
This work aims to fill the gap in research on intersectional users' software-related perspectives.
We collected posts from over 700 intersectional subreddits discussing software applications.
Our findings revealed that intersectional communities report critical complaints related to cyberbullying, inappropriate content, and discrimination.
arXiv Detail & Related papers (2024-10-10T16:33:05Z)
- Ethical Challenges in Computer Vision: Ensuring Privacy and Mitigating Bias in Publicly Available Datasets [0.0]
This paper aims to shed light on the ethical problems of creating and deploying computer vision tech.
Computer vision has become a vital tool in many industries, including medical care, security systems, and trade.
arXiv Detail & Related papers (2024-08-31T00:59:29Z)
- The Call for Socially Aware Language Technologies [94.6762219597438]
We argue that many of these issues share a common core: a lack of awareness of the factors, context, and implications of the social environment in which NLP operates.
We argue that substantial challenges remain for NLP to develop social awareness and that we are just at the beginning of a new era for the field.
arXiv Detail & Related papers (2024-05-03T18:12:39Z)
- Eagle: Ethical Dataset Given from Real Interactions [74.7319697510621]
We create datasets extracted from real interactions between ChatGPT and users that exhibit social biases, toxicity, and immoral problems.
Our experiments show that Eagle captures complementary aspects, not covered by existing datasets proposed for evaluation and mitigation of such ethical challenges.
arXiv Detail & Related papers (2024-02-22T03:46:02Z)
- A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Models on Twitter [2.9604738405097333]
Harmful content detection models tend to have higher false positive rates for content from marginalized groups.
We propose a principled approach to detecting and measuring the severity of potential harms associated with a text-based model.
We apply our methodology to audit Twitter's English marginal abuse model, which is used for removing amplification eligibility of marginally abusive content.
arXiv Detail & Related papers (2022-10-07T20:28:00Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
- ETHOS: an Online Hate Speech Detection Dataset [6.59720246184989]
We present 'ETHOS', a textual dataset with two variants: binary and multi-label, based on YouTube and Reddit comments validated using the Figure-Eight crowdsourcing platform.
Our key assumption is that, even though such a time-consuming process yields only a small amount of labelled data, it guarantees hate speech occurrences in the examined material.
arXiv Detail & Related papers (2020-06-11T08:59:57Z)
- Whose Side are Ethics Codes On? Power, Responsibility and the Social Good [0.0]
We argue that ethics codes that elevate consumers may simultaneously subordinate the needs of vulnerable populations.
We introduce the concept of digital differential vulnerability to explain disproportionate exposures to harm within data technology.
arXiv Detail & Related papers (2020-02-04T22:05:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.