EKILA: Synthetic Media Provenance and Attribution for Generative Art
- URL: http://arxiv.org/abs/2304.04639v1
- Date: Mon, 10 Apr 2023 15:11:26 GMT
- Title: EKILA: Synthetic Media Provenance and Attribution for Generative Art
- Authors: Kar Balan, Shruti Agarwal, Simon Jenni, Andy Parsons, Andrew Gilbert,
John Collomosse
- Abstract summary: EKILA is a decentralized framework that enables creatives to receive recognition and reward for their contributions to generative AI (GenAI).
EKILA proposes a robust visual attribution technique and combines it with an emerging content provenance standard (C2PA).
EKILA extends the non-fungible token (NFT) ecosystem to introduce a tokenized representation for rights.
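The summary's pairing of asset provenance with C2PA can be illustrated with a minimal sketch. The field names below are hypothetical, not the real C2PA schema (a genuine C2PA manifest is a signed JUMBF structure with standardized assertions); the sketch only shows the core idea of binding a content hash to its generative lineage.

```python
import hashlib


def make_manifest(asset_bytes: bytes, generator: str, ingredients: list) -> dict:
    """Build a minimal C2PA-style provenance manifest (illustrative schema).

    Binds a hash of the asset to the generative model and the IDs of
    training assets that contributed to it.
    """
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "generator": generator,        # e.g. model name/version
        "ingredients": ingredients,    # IDs of contributing training images
    }


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check the manifest still matches the asset (tamper evidence)."""
    return manifest["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


m = make_manifest(b"fake-image-bytes", "genai-model-v1", ["img-001", "img-042"])
assert verify_manifest(b"fake-image-bytes", m)
assert not verify_manifest(b"tampered-bytes", m)
```

In the real standard the manifest is cryptographically signed and embedded in the asset itself; this sketch omits signing and only demonstrates the hash binding.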
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present EKILA, a decentralized framework that enables creatives to receive
recognition and reward for their contributions to generative AI (GenAI). EKILA
proposes a robust visual attribution technique and combines this with an
emerging content provenance standard (C2PA) to address the problem of synthetic
image provenance -- determining the generative model and training data
responsible for an AI-generated image. Furthermore, EKILA extends the
non-fungible token (NFT) ecosystem to introduce a tokenized representation for
rights, enabling a triangular relationship between the asset's Ownership,
Rights, and Attribution (ORA). Leveraging the ORA relationship enables creators
to express agency over training consent and, through our attribution model, to
receive apportioned credit, including royalty payments for the use of their
assets in GenAI.
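The abstract's notion of "apportioned credit" can be sketched as a simple proportional split of a royalty across training-data contributors. The attribution scores below are hypothetical inputs; EKILA's actual visual attribution model is learned and is not reproduced here.

```python
def apportion_royalty(total_royalty: float, attribution_scores: dict) -> dict:
    """Split a royalty among creators in proportion to attribution scores.

    attribution_scores maps creator ID to a non-negative score indicating
    how strongly that creator's assets contributed to a generated image.
    """
    total = sum(attribution_scores.values())
    if total <= 0:
        raise ValueError("attribution scores must sum to a positive value")
    return {
        creator: total_royalty * score / total
        for creator, score in attribution_scores.items()
    }


payouts = apportion_royalty(10.0, {"alice": 0.5, "bob": 0.3, "carol": 0.2})
assert abs(sum(payouts.values()) - 10.0) < 1e-9
```

In the ORA model, the rights token (not shown) would gate whether a creator's assets may be used for training at all; apportionment then applies only to consenting contributors.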
Related papers
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- An Economic Solution to Copyright Challenges of Generative AI [35.37023083413299]
Generative artificial intelligence systems are trained to generate new pieces of text, images, videos, and other media.
There is growing concern that such systems may infringe on the copyright interests of training data contributors.
We propose a framework that compensates copyright owners proportionally to their contributions to the creation of AI-generated content.
arXiv Detail & Related papers (2024-04-22T08:10:38Z)
- Ethical-Lens: Curbing Malicious Usages of Open-Source Text-to-Image Models [51.69735366140249]
We introduce Ethical-Lens, a framework designed to facilitate the value-aligned usage of text-to-image tools.
Ethical-Lens ensures value alignment in text-to-image models across toxicity and bias dimensions.
Our experiments reveal that Ethical-Lens enhances alignment capabilities to levels comparable with or superior to commercial models.
arXiv Detail & Related papers (2024-04-18T11:38:25Z)
- ©Plug-in Authorization for Human Content Copyright Protection in Text-to-Image Model [71.47762442337948]
State-of-the-art models create high-quality content without crediting original creators.
We propose the copyright Plug-in Authorization framework, introducing three operations: addition, extraction, and combination.
Extraction allows creators to reclaim copyright from infringing models, and combination enables users to merge different copyright plug-ins.
arXiv Detail & Related papers (2024-04-18T07:48:00Z)
- Not All Similarities Are Created Equal: Leveraging Data-Driven Biases to Inform GenAI Copyright Disputes [20.237329910319293]
This paper introduces a novel approach that leverages the learning capacity of GenAI models for copyright legal analysis.
We propose a data-driven approach to identify the genericity of works created by GenAI.
The potential implications of measuring expressive genericity for copyright law are profound.
arXiv Detail & Related papers (2024-03-26T13:32:32Z)
- Data Equity: Foundational Concepts for Generative AI [0.0]
GenAI promises immense potential to drive digital and social innovation.
GenAI has the potential to democratize access and usage of technologies.
However, left unchecked, it could deepen inequities.
arXiv Detail & Related papers (2023-10-27T05:19:31Z)
- AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z)
- DECORAIT -- DECentralized Opt-in/out Registry for AI Training [20.683704089165406]
We present DECORAIT, a decentralized registry through which content creators may assert their right to opt in or out of AI training.
GenAI enables images to be synthesized using AI models trained on vast amounts of data scraped from public sources.
arXiv Detail & Related papers (2023-09-25T16:19:35Z)
- Semantic Communications for Artificial Intelligence Generated Content (AIGC) Toward Effective Content Creation [75.73229320559996]
This paper develops a conceptual model for the integration of AIGC and SemCom.
A novel framework that employs AIGC technology is proposed as an encoder and decoder for semantic information.
The framework can adapt to different types of content generated, the required quality, and the semantic information utilized.
arXiv Detail & Related papers (2023-08-09T13:17:21Z)
- Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Since the production process imposes the user's posture as a constraint on the AIGC model, it makes the generated content more aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.