The Disintegration of Free Speech
- URL: http://arxiv.org/abs/2603.00754v1
- Date: Sat, 28 Feb 2026 17:57:58 GMT
- Title: The Disintegration of Free Speech
- Authors: Yiyang Mei
- Abstract summary: This Article examines the constitutional status of AI-mediated communication under the First Amendment. It argues that under existing jurisprudence, AI-generated content is protected speech. The Article concludes that this doctrinal trajectory risks severing the First Amendment from its democratic foundations.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This Article examines the constitutional status of AI-mediated communication under the First Amendment. Social media platforms, increasingly integrated with generative AI systems, now function as core public communication infrastructures. Within this environment, AI-generated pornography and large-scale political misinformation have produced significant dignitary and democratic harms. In response, states have enacted regulations requiring platforms to remove certain content, disclose recommendation practices, or redesign moderation systems. These measures, however, collide with prevailing First Amendment doctrine. The Article argues that under existing jurisprudence, AI-generated content is protected speech, and regulations targeting platform moderation practices are likely unconstitutional. Since the 1970s, the Supreme Court has shifted from a structural concern with the free circulation of information toward a strong protection of editorial autonomy, understood as control over authorship, expressive identity, and freedom from compelled attribution. Once content moderation is characterized as editorial judgment, regulatory mandates that compel or restrict such practices presumptively violate the Free Speech Clause. The Article concludes that this doctrinal trajectory risks severing the First Amendment from its democratic foundations and calls for a reconstruction attentive to automated content production, platform infrastructure, and concentrated communicative power.
Related papers
- Synthetic Voices, Real Threats: Evaluating Large Text-to-Speech Models in Generating Harmful Audio [63.18443674004945]
This work explores a content-centric threat: exploiting TTS systems to produce speech containing harmful content. We present HARMGEN, a suite of five attacks organized into two families that address these challenges.
arXiv Detail & Related papers (2025-11-14T03:00:04Z) - Latent Topic Synthesis: Leveraging LLMs for Electoral Ad Analysis [51.95395936342771]
We introduce an end-to-end framework for automatically generating an interpretable topic taxonomy from an unlabeled corpus. We apply this framework to a large corpus of Meta political ads from the month preceding the 2024 U.S. Presidential election. Our approach uncovers latent discourse structures, synthesizes semantically rich topic labels, and annotates topics with moral framing dimensions.
arXiv Detail & Related papers (2025-10-16T20:30:20Z) - Reclaiming Constitutional Authority of Algorithmic Power [0.0]
Whether and how to govern AI is no longer a question of technical regulation. This Article reconstructs a constitutional framework grounded in covenantal authority and the right of lawful resistance. Individuals retain a constitutional right to resist systems that impose orthodoxy or erode the domain of conscience.
arXiv Detail & Related papers (2025-08-12T23:46:30Z) - HatePRISM: Policies, Platforms, and Research Integration. Advancing NLP for Hate Speech Proactive Mitigation [67.69631485036665]
We conduct a comprehensive examination of hate speech regulations and strategies from three perspectives. Our findings reveal significant inconsistencies in hate speech definitions and moderation practices across jurisdictions. We suggest ideas and research directions for further exploration of a unified framework for automated hate speech moderation.
arXiv Detail & Related papers (2025-07-06T11:25:23Z) - Intentionally Unintentional: GenAI Exceptionalism and the First Amendment [9.330416981746971]
This paper challenges the assumption that courts should grant First Amendment protections to outputs from large generative AI models. We argue that because these models lack intentionality, their outputs do not constitute speech as understood in the context of established legal precedent.
arXiv Detail & Related papers (2025-06-05T16:26:32Z) - The Model Hears You: Audio Language Model Deployments Should Consider the Principle of Least Privilege [48.18013944679755]
The latest Audio Language Models (Audio LMs) process speech directly instead of relying on a separate transcription step. This shift preserves detailed information, such as intonation or the presence of multiple speakers, that would otherwise be lost in transcription. It also introduces new safety risks, including the potential misuse of speaker identity cues and other sensitive vocal attributes.
arXiv Detail & Related papers (2025-03-21T04:03:59Z) - Generative AI as Digital Media [0.0]
Generative AI is frequently portrayed as revolutionary or even apocalyptic. This essay argues that such views are misguided. Instead, generative AI should be understood as an evolutionary step in the broader algorithmic media landscape.
arXiv Detail & Related papers (2025-03-09T08:58:17Z) - A Hate Speech Moderated Chat Application: Use Case for GDPR and DSA Compliance [0.0]
This research presents a novel application that incorporates legal and ethical reasoning into the content moderation process.
Two use cases fundamental to online communication are presented and implemented using technologies such as GPT-3.5, Solid Pods, and the rule language Prova.
The work proposes a novel approach to reasoning within different legal and ethical definitions of hate speech and to planning fitting counter-hate speech.
arXiv Detail & Related papers (2024-10-10T08:28:38Z) - Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis [48.14390493099495]
How AI models should handle political topics remains a challenging open question that requires better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach called Demarcation, scoring abusive speech on four aspects: (i) severity scale; (ii) presence of a target; (iii) context scale; (iv) legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
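The four scoring aspects above can be sketched as a small data structure. This is an illustrative sketch only: the field names, value ranges, and the additive aggregation are assumptions for demonstration, not the formula from the Demarcation paper.

```python
from dataclasses import dataclass


@dataclass
class DemarcationScore:
    """Hypothetical container for the four aspects named in the abstract."""
    severity: int    # (i) severity scale, e.g. 0 (none) to 3 (extreme)
    has_target: bool # (ii) presence of an identifiable target
    context: int     # (iii) context scale, e.g. 0 (benign) to 2 (aggravating)
    legal: int       # (iv) legal scale, e.g. 0 (lawful) to 2 (likely unlawful)

    def total(self) -> int:
        # Illustrative aggregation only; the paper does not specify a formula.
        return self.severity + int(self.has_target) + self.context + self.legal


score = DemarcationScore(severity=2, has_target=True, context=1, legal=1)
print(score.total())  # prints 5
```

A real system would likely weight the aspects and route the total into moderation actions (counterspeech, detoxification, removal) rather than simply summing them.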
arXiv Detail & Related papers (2024-06-27T21:45:33Z) - The Unappreciated Role of Intent in Algorithmic Moderation of Social Media Content [2.2618341648062477]
This paper examines the role of intent in content moderation systems.
We review state-of-the-art detection models and benchmark training datasets for online abuse to assess their awareness of intent and their ability to capture it.
arXiv Detail & Related papers (2024-05-17T18:05:13Z) - Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions [47.944081120226905]
We construct a novel dataset of Wikipedia editor discussions along with their reasoning in three languages.
The dataset contains the stances of the editors (keep, delete, merge, comment), along with the stated reason, and a content moderation policy, for each edit decision.
We demonstrate that stance and corresponding reason (policy) can be predicted jointly with a high degree of accuracy, adding transparency to the decision-making process.
arXiv Detail & Related papers (2023-10-09T15:11:02Z) - A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
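The audit described above can be sketched as a similarity check between a filtered feed and its baseline. This is a minimal sketch under stated assumptions: the Jaccard overlap of topic labels and the 0.5 threshold are hypothetical proxies, not the paper's actual definition of "similar" informational content.

```python
def informational_similarity(filtered_feed, baseline_feed):
    """Jaccard overlap of topic labels between two feeds.
    Illustrative proxy only; the paper's notion of similarity is more general."""
    a, b = set(filtered_feed), set(baseline_feed)
    return len(a & b) / len(a | b) if a | b else 1.0


def audit(filtered_feed, baseline_feed, threshold=0.5):
    """Pass the audit when the filtered feed retains enough of the
    baseline feed's informational content (threshold is hypothetical)."""
    return informational_similarity(filtered_feed, baseline_feed) >= threshold


baseline = ["politics", "health", "sports", "local-news"]
filtered = ["politics", "health", "entertainment"]
print(audit(filtered, baseline))  # prints False (overlap 2/5 = 0.4 < 0.5)
```

In the user-driven framing, each user's baseline feed would be constructed from their own stated preferences, and the regulator would run this check over sampled feeds rather than a single pair.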
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.