Content Moderation Futures
- URL: http://arxiv.org/abs/2509.09076v1
- Date: Thu, 11 Sep 2025 00:42:41 GMT
- Title: Content Moderation Futures
- Authors: Lindsay Blackwell
- Abstract summary: This study examines the failures and possibilities of contemporary social media governance. I argue that successful governance is undermined by the pursuit of technological novelty and rapid growth.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examines the failures and possibilities of contemporary social media governance through the lived experiences of various content moderation professionals. Drawing on participatory design workshops with 33 practitioners in both the technology industry and broader civil society, this research identifies significant structural misalignments between corporate incentives and public interests. While experts agree that successful content moderation is principled, consistent, contextual, proactive, transparent, and accountable, current technology companies fail to achieve these goals, due in part to exploitative labor practices, chronic underinvestment in user safety, and pressures of global scale. I argue that successful governance is undermined by the pursuit of technological novelty and rapid growth, resulting in platforms that necessarily prioritize innovation and expansion over public trust and safety. To counter this dynamic, I revisit the computational history of care work, to motivate present-day solidarity amongst platform governance workers and inspire systemic change.
Related papers
- Navigating the Sociotechnical Imaginaries of Brazilian Tech Workers [0.0]
This chapter examines the sociotechnical imaginaries of Brazilian tech workers, a group often overlooked in digital labor research. It argues that looking from the Global South helps challenge data universalism and foregrounds locally situated values, constraints, and futures. The findings highlight recurring tensions between academic and industry discourse on algorithmic bias, the limits of corporate accountability regarding user harm and surveillance, and the contested meanings of digital sovereignty.
arXiv Detail & Related papers (2026-01-09T17:30:04Z) - Empowering Real-World: A Survey on the Technology, Practice, and Evaluation of LLM-driven Industry Agents [63.03252293761656]
This paper systematically reviews the technologies, applications, and evaluation methods of industry agents based on large language models (LLMs). We examine the three key technological pillars that support the advancement of agent capabilities: Memory, Planning, and Tool Use. We provide an overview of the application of industry agents in real-world domains such as digital engineering, scientific discovery, embodied intelligence, collaborative business execution, and complex system simulation.
arXiv Detail & Related papers (2025-10-20T12:46:55Z) - Sustainable and Adaptive Growth in Computing Education [0.0]
This paper introduces a new framework which addresses the question: How can computing education and professional development be connected to volatile sectors? It integrates two iterative, interconnected cycles, an educational and a professional, by linking education with profession to establish a lifelong, renewable practice.
arXiv Detail & Related papers (2025-10-19T14:44:03Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Shaping a Profession, Building a Community: A Practitioner-Led Investigation of Public Interest Technologists in Civil Society [0.4779196219827508]
Public interest technology (PIT) is growing in popularity among technical practitioners working in civil society and nonprofit organizations. This paper describes a mixed-methods study that characterizes technologists within the specific context of civil society, civil rights, and advocacy organizations in North America and Western Europe.
arXiv Detail & Related papers (2025-08-10T08:15:30Z) - Community Moderation and the New Epistemology of Fact Checking on Social Media [124.26693978503339]
Social media platforms have traditionally relied on independent fact-checking organizations to identify and flag misleading content. X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking. We examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.
arXiv Detail & Related papers (2025-05-26T14:50:18Z) - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [377.2483044466149]
Generative Foundation Models (GenFMs) have emerged as transformative tools. Their widespread adoption raises critical concerns regarding trustworthiness across dimensions. This paper presents a comprehensive framework to address these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z) - Understanding the Factors Influencing Self-Managed Enterprises of Crowdworkers: A Comprehensive Review [49.623146117284115]
This paper investigates the shift in crowdsourcing towards self-managed enterprises of crowdworkers (SMECs).
It reviews the literature to understand the foundational aspects of this shift, focusing on identifying key factors that may explain the rise of SMECs.
The study aims to guide future research and inform policy and platform development, emphasizing the importance of fair labor practices in this evolving landscape.
arXiv Detail & Related papers (2024-03-19T14:33:16Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Techno-Utopians, Scammers, and Bullshitters: The Promise and Peril of Web3 and Blockchain Technologies According to Operators and Venture Capital Investors [1.8130068086063336]
Proponents and developers of Web3 and blockchain argue that these technologies can revolutionize how people live and work.
How technologists think about the technological future they hope to enable impacts the form technologies take, their potential benefits, and their potential harms.
We conducted semi-structured interviews with 29 operators and professional investors in the Web3 and blockchain field.
arXiv Detail & Related papers (2023-07-14T22:36:14Z) - Responsible and Inclusive Technology Framework: A Formative Framework to Promote Societal Considerations in Information Technology Contexts [1.9991645269305982]
This paper contributes a formative framework -- the Responsible and Inclusive Technology Framework -- that orients critical reflection around the social contexts of technology creation and use.
We expect that the implementation of the Responsible and Inclusive Technology framework, especially in business-to-business industry settings, will serve as a catalyst for more intentional and socially-grounded practices.
arXiv Detail & Related papers (2023-02-22T18:59:04Z) - Designing for Human Rights in AI [0.0]
AI systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions.
It is becoming evident that these technological developments are consequential to people's fundamental human rights.
Technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context.
arXiv Detail & Related papers (2020-05-11T09:21:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.