Trustworthy AIGC Copyright Management with Full Lifecycle Recording and Multi-party Supervision in Blockchain
- URL: http://arxiv.org/abs/2406.14966v2
- Date: Wed, 12 Mar 2025 06:33:02 GMT
- Title: Trustworthy AIGC Copyright Management with Full Lifecycle Recording and Multi-party Supervision in Blockchain
- Authors: Jiajia Jiang, Moting Su, Fengshu Li, Xiangli Xiao, Yushu Zhang,
- Abstract summary: The current legal system for copyright is built around human creators, yet in the realm of AIGC, the role of humans in content creation has diminished. It is necessary to meticulously record contributions of all entities involved in the generation of AIGC to achieve a fair distribution of copyright. This study thoroughly records the intermediate data generated throughout the full lifecycle of AIGC and deposits them into a decentralized blockchain system for secure multi-party supervision.
- Score: 10.015046490760199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As artificial intelligence technology becomes increasingly widespread, AI-generated content (AIGC) is gradually penetrating into many fields. Although AIGC plays an increasingly prominent role in business and cultural communication, the issue of copyright has also triggered widespread social discussion. The current legal system for copyright is built around human creators, yet in the realm of AIGC, the role of humans in content creation has diminished, with the creative expression primarily reliant on artificial intelligence. This discrepancy has led to numerous complexities and challenges in determining the copyright ownership of AIGC within the established legal boundaries. In view of this, it is necessary to meticulously record contributions of all entities involved in the generation of AIGC to achieve a fair distribution of copyright. For this purpose, this study thoroughly records the intermediate data generated throughout the full lifecycle of AIGC and deposits them into a decentralized blockchain system for secure multi-party supervision, thereby constructing a trustworthy AIGC copyright management system. In the event of copyright disputes, auditors can retrieve valuable proof from the blockchain, accurately defining the copyright ownership of AIGC products. Both theoretical and experimental analyses confirm that this scheme shows exceptional performance and security in the management of AIGC copyrights.
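The abstract describes recording intermediate data from every stage of the AIGC lifecycle and depositing it on a blockchain so that, in a dispute, auditors can reconstruct who contributed what. The sketch below illustrates that idea at a toy level only; the names (LifecycleLedger, record_stage, audit) and the hash-chained in-memory list standing in for the blockchain are assumptions made for illustration, not the paper's actual system.

```python
# Minimal sketch of lifecycle recording for AIGC copyright evidence.
# All identifiers here are hypothetical; the on-chain layer is abstracted
# as an append-only, hash-linked list of records.
import hashlib
import json
import time
from dataclasses import dataclass, field


def digest(payload: dict) -> str:
    """Deterministic SHA-256 digest of a JSON-serializable payload."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


@dataclass
class LifecycleLedger:
    """Append-only record of AIGC lifecycle stages, hash-chained like a blockchain."""
    records: list = field(default_factory=list)

    def record_stage(self, stage: str, contributor: str, artifact: dict) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        entry = {
            "stage": stage,               # e.g. "prompt", "model", "generation"
            "contributor": contributor,   # entity whose contribution is recorded
            "artifact_digest": digest(artifact),
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry["hash"] = digest(entry)
        self.records.append(entry)
        return entry

    def audit(self) -> bool:
        """Verify the hash chain so an auditor can trust the recorded history."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev_hash"] != prev or digest(body) != rec["hash"]:
                return False
            prev = rec["hash"]
        return True


# Example: record three contributions, then audit the chain in a dispute.
ledger = LifecycleLedger()
ledger.record_stage("prompt", "user_alice", {"text": "a watercolor city skyline"})
ledger.record_stage("model", "provider_bob", {"model_id": "gen-v1"})
ledger.record_stage("generation", "platform_carol", {"output_id": "img-001"})
assert ledger.audit()
```

In a real deployment the entries would be submitted to an actual blockchain (e.g. via a smart contract) rather than kept in a local list; recording only digests means the raw intermediate artifacts never need to be disclosed on-chain.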
Related papers
- Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios.
Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z) - Content ARCs: Decentralized Content Rights in the Age of Generative AI [14.208062688463524]
This paper proposes a framework called Content ARCs (Authenticity, Rights, Compensation).
By combining open standards for provenance and dynamic licensing with data attribution, Content ARCs create a mechanism for managing rights and compensating creators for using their work in AI training.
We characterize several nascent works in the AI data licensing space within Content ARCs and identify where challenges remain to fully implement the end-to-end framework.
arXiv Detail & Related papers (2025-03-14T11:57:08Z) - Copyright in AI-generated works: Lessons from recent developments in patent law [0.0]
In Thaler v The Comptroller-General of Patents, Designs and Trade Marks, Smith J. held that an AI owner can claim patent ownership over an AI-generated invention based on their ownership and control of the AI system.
This paper examines whether the AI-owner approach is a better option for determining copyright ownership of AI-generated works.
arXiv Detail & Related papers (2025-02-04T04:16:44Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - ©Plug-in Authorization for Human Content Copyright Protection in Text-to-Image Model [71.47762442337948]
State-of-the-art models create high-quality content without crediting original creators.
We propose the copyright Plug-in Authorization framework, introducing three operations: addition, extraction, and combination.
Extraction allows creators to reclaim copyright from infringing models, and combination enables users to merge different copyright plug-ins.
arXiv Detail & Related papers (2024-04-18T07:48:00Z) - Uncertain Boundaries: Multidisciplinary Approaches to Copyright Issues in Generative AI [2.669847575321326]
The survey aims to stay abreast of the latest developments and open problems.
It will first outline methods of detecting copyright infringement in mediums such as text, image, and video.
Next, it will delve into existing techniques aimed at safeguarding copyrighted works from generative models.
arXiv Detail & Related papers (2024-03-31T22:10:01Z) - Prompting the E-Brushes: Users as Authors in Generative AI [0.0]
The Copyright Office, in its March 2023 Guidance, argues against users of Generative AI being eligible for copyright protection.
This Article challenges this viewpoint and advocates for the recognition of Generative AI users who incorporate these tools into their creative endeavors.
Rather than dismissing the contributions generated by AI, this Article suggests a simplified and streamlined registration process.
arXiv Detail & Related papers (2024-03-25T02:20:14Z) - Generative AI and Copyright: A Dynamic Perspective [0.0]
Generative AI is poised to disrupt the creative industry.
The compensation to creators whose content has been used to train generative AI models (the fair use standard) and the eligibility of AI-generated content for copyright protection (AI-copyrightability) are key issues.
This paper aims to better understand the economic implications of these two regulatory issues and their interactions.
arXiv Detail & Related papers (2024-02-27T07:12:48Z) - Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications [4.959125079708047]
The AIGC copyright dilemma can severely stifle the development of AIGC and impose substantial costs on society as a whole.
Previous work advocated copyleft for AI governance but offered no substantive analysis.
Key findings include: a) people generally perceive the dilemma, b) they prefer to use authorized AIGC under loose restriction, and c) they are positive to copyleft in AIGC and willing to use it in the future.
arXiv Detail & Related papers (2024-02-19T15:20:35Z) - Copyright Protection in Generative AI: A Technical Perspective [58.84343394349887]
Generative AI has witnessed rapid advancement in recent years, expanding its capabilities to create synthesized content such as text, images, audio, and code.
The high fidelity and authenticity of contents generated by these Deep Generative Models (DGMs) have sparked significant copyright concerns.
This work delves into this issue by providing a comprehensive overview of copyright protection from a technical perspective.
arXiv Detail & Related papers (2024-02-04T04:00:33Z) - Training Is Everything: Artificial Intelligence, Copyright, and Fair Training [9.653656920225858]
Companies that use such content to train their AI engines often believe such usage should be considered "fair use".
Copyright owners, as well as their supporters, consider the incorporation of copyrighted works into training sets for AI to constitute misappropriation of owners' intellectual property.
We identify both strong and spurious arguments on both sides of this debate.
arXiv Detail & Related papers (2023-05-04T04:01:00Z) - A Pathway Towards Responsible AI Generated Content [68.13835802977125]
We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
arXiv Detail & Related papers (2023-03-02T14:58:40Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)