Freedom of Speech and AI Output
- URL: http://arxiv.org/abs/2308.08673v1
- Date: Wed, 16 Aug 2023 20:49:41 GMT
- Title: Freedom of Speech and AI Output
- Authors: Eugene Volokh, Mark Lemley, Peter Henderson
- Abstract summary: Even though current AI programs are of course not people and do not themselves have constitutional rights, their speech may potentially be protected because of the rights of the programs' creators.
But beyond that, and likely more significantly, AI programs' speech should be protected because of the rights of their users.
- Score: 5.542416076785831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Is the output of generative AI entitled to First Amendment protection? We're
inclined to say yes. Even though current AI programs are of course not people
and do not themselves have constitutional rights, their speech may potentially
be protected because of the rights of the programs' creators. But beyond that,
and likely more significantly, AI programs' speech should be protected because
of the rights of their users, both the users' rights to listen and their rights
to speak. In this short Article, we sketch the outlines of this analysis.
Related papers
- The algorithmic muse and the public domain: Why copyright's legal philosophy precludes protection for generative AI outputs [0.0]
Generative AI (GenAI) outputs are not copyrightable.
GenAI fundamentally severs the direct human creative link to expressive form.
The paper advocates for a clear distinction: human creative contributions to AI-generated works may warrant protection, but the raw algorithmic output should remain in the public domain.
arXiv Detail & Related papers (2025-12-15T05:39:30Z)
- Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis [57.68073583427415]
We study whether media coverage has the potential to push AI creators into the production of safe products.
Our results reveal that media is indeed able to foster cooperation between creators and users, but not always.
By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator.
arXiv Detail & Related papers (2025-09-02T12:13:34Z)
- Intentionally Unintentional: GenAI Exceptionalism and the First Amendment [9.330416981746971]
This paper challenges the assumption that courts should grant First Amendment protections to outputs from large generative AI models.
We argue that because these models lack intentionality, their outputs do not constitute speech as understood in the context of established legal precedent.
arXiv Detail & Related papers (2025-06-05T16:26:32Z)
- Giving AI a voice: how does AI think it should be treated? [0.0]
This chapter includes a brief human-AI conversation on the topic of AI rights and ethics.
There are new questions and angles that AI brings to the table that we might not have considered before.
arXiv Detail & Related papers (2025-04-21T07:59:17Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- A+AI: Threats to Society, Remedies, and Governance [0.0]
This document focuses on the threats, especially near-term threats, that Artificial Intelligence (AI) brings to society.
It includes a table showing which countermeasures are likely to mitigate which threats.
The paper lists specific actions government should take as soon as possible.
arXiv Detail & Related papers (2024-09-03T18:43:47Z)
- Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban [44.99833362998488]
We propose a framework for assessing AI's impact on elections.
We group AI-enabled campaigning uses into three categories -- campaign operations, voter outreach, and deception.
We provide the first systematic evidence from a preregistered representative survey.
arXiv Detail & Related papers (2024-08-08T12:58:20Z)
- Debunking Robot Rights Metaphysically, Ethically, and Legally [0.10241134756773229]
We argue that machines are not the kinds of things that may be denied or granted rights.
From a legal perspective, the best analogy to robot rights is not human rights but corporate rights.
arXiv Detail & Related papers (2024-04-15T18:23:58Z)
- Copyright Protection in Generative AI: A Technical Perspective [58.84343394349887]
Generative AI has witnessed rapid advancement in recent years, expanding its capabilities to create synthesized content such as text, images, audio, and code.
The high fidelity and authenticity of content generated by these Deep Generative Models (DGMs) have sparked significant copyright concerns.
This work delves into this issue by providing a comprehensive overview of copyright protection from a technical perspective.
arXiv Detail & Related papers (2024-02-04T04:00:33Z)
- ChatGPT and Works Scholarly: Best Practices and Legal Pitfalls in Writing with AI [9.550238260901121]
We offer approaches to evaluating whether or not such AI-writing violates copyright or falls within the safe harbor of fair use.
As AI is likely to grow more capable in the coming years, it is appropriate to begin integrating AI into scholarly writing activities.
arXiv Detail & Related papers (2023-05-04T15:38:20Z)
- Training Is Everything: Artificial Intelligence, Copyright, and Fair Training [9.653656920225858]
Companies that use such content to train their AI engines often believe such usage should be considered "fair use".
Copyright owners, as well as their supporters, consider the incorporation of copyrighted works into AI training sets to constitute misappropriation of the owners' intellectual property.
We identify both strong and spurious arguments on both sides of this debate.
arXiv Detail & Related papers (2023-05-04T04:01:00Z)
- Emotion Selectable End-to-End Text-based Speech Editing [63.346825713704625]
Emo-CampNet (emotion CampNet) is an emotion-selectable text-based speech editing model.
It can effectively control the emotion of the generated speech in the process of text-based speech editing.
It can also edit unseen speakers' speech.
arXiv Detail & Related papers (2022-12-20T12:02:40Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI on organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
- Collecting the Public Perception of AI and Robot Rights [10.791267046450077]
The European Parliament proposed that advanced robots could be granted "electronic personalities".
This paper collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future.
arXiv Detail & Related papers (2020-08-04T05:35:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.